Similar documents
20 similar documents retrieved (search time: 31 ms)
1.
In most industrial processes, vast amounts of data are recorded through their distributed control systems (DCSs) and emergency shutdown (ESD) systems. This two-part article presents a dynamic risk analysis methodology that uses alarm databases to improve process safety and product quality. The methodology consists of three steps: (i) tracking of abnormal events over an extended period of time, (ii) event-tree and set-theoretic formulations to compact the abnormal-event data, and (iii) Bayesian analysis to calculate the likelihood of the occurrence of incidents. Steps (i) and (ii) are presented in Part I and step (iii) in Part II. The event-tree and set-theoretic formulations allow compaction of massive numbers (millions) of abnormal events. For each abnormal event, associated with a process or quality variable, its path through the safety or quality systems designed to return its variable to the normal operation range is recorded. Event trees are prepared to record the successes and failures of each safety or quality system as it acts on each abnormal event. Over several months of operation, on the order of 10⁶ paths through event trees are stored. The new set-theoretic structure condenses the paths to a single compact data record, leading to significant improvement in the efficiency of the probabilistic calculations and permitting Bayesian analysis of large alarm databases in real time. As a case study, steps (i) and (ii) are applied to an industrial fluidized catalytic cracker. © 2011 American Institute of Chemical Engineers AIChE J, 2012
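The set-theoretic compaction in step (ii) can be sketched in a few lines: once each abnormal event's path through the safety systems is encoded as a tuple, millions of recorded paths collapse to one count per distinct path. The path encoding below is illustrative, not the paper's actual data structure.

```python
from collections import Counter

def compact_paths(event_paths):
    """Collapse a large list of recorded event-tree paths to one count
    per distinct path -- the compact record the Bayesian likelihood
    calculations operate on."""
    return Counter(tuple(p) for p in event_paths)

# Illustrative paths: each tuple lists the systems an abnormal event
# traversed before the variable returned to its normal range.
paths = [
    ("alarm", "operator-ok"),
    ("alarm", "operator-ok"),
    ("alarm", "interlock", "shutdown"),
]
counts = compact_paths(paths)
```

However many raw events are logged, the compacted record grows only with the number of distinct paths through the event tree, which is what makes real-time Bayesian analysis of large alarm databases feasible.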

2.
Plant-specific dynamic failure assessment using Bayesian theory
Abnormal events of varying magnitudes result in incipient faults, near-misses, incidents, and accidents in chemical plants. Their detection and diagnosis have been an active area of research [Venkatasubramanian, V., Rengaswamy, R., Kavuri, S.N., 2003a. A review of process fault detection and diagnosis, Part II: Quantitative model and search strategies. Computers and Chemical Engineering 27(3), 313-326; Venkatasubramanian, V., Rengaswamy, R., Kavuri, S.N., Yin, K., 2003b. A review of process fault detection and diagnosis, Part III: Process history based methods. Computers and Chemical Engineering 27(3), 327-346]. However, estimation of the failure probabilities of safety systems to predict these consequences (end-states) has received little attention in the chemical process industries (CPI). In this work, methods for plant-specific, dynamic risk assessment are developed to predict the frequencies of abnormal events utilizing accident precursor data, helping to achieve inherently safer operations. These methods, which involve repetitive risk analysis after abnormal events occur, are especially beneficial for operations involving complex nonlinearities and multi-component interactions. Herein, the failure probabilities of safety systems and end-states are estimated using copulas and Bayesian analysis to ensure better predictions. The joint probability distribution for the failure probabilities of safety systems having different consequences is modeled using the Cuadras–Augé copula [Nelsen, R.B., 1999. An Introduction to Copulas. Lecture Notes in Statistics, Springer, New York]. Accident precursor data are used to dynamically modify the initial estimates of failure probabilities to obtain posterior failure probabilities of the safety systems of an exothermic reactor. Finally, fuzzy memberships to various critical zones are formulated as a function of end-state probabilities to judge the safety status of a chemical plant.
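The core of the dynamic updating step described above, stripped of the copula machinery, is a conjugate Bayesian update: a Beta prior on a safety system's failure-on-demand probability is revised with accident-precursor counts. The prior parameters and counts below are illustrative, not values from the paper.

```python
def posterior_failure_prob(a_prior, b_prior, failures, demands):
    """Posterior mean of a Beta-Binomial model for a safety system's
    failure-on-demand probability, updated with precursor data."""
    a_post = a_prior + failures
    b_post = b_prior + (demands - failures)
    return a_post / (a_post + b_post)

# Illustrative: Beta(1, 19) prior (prior mean 0.05), then 3 observed
# failures in 40 demands shift the estimate upward.
p = posterior_failure_prob(1, 19, 3, 40)
```

Repeating this update after every abnormal event is what makes the assessment "dynamic": the failure-probability estimates track the plant's actual precursor history rather than staying fixed at generic design values.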

3.
Dynamic risk analysis (DRA) has been used widely to analyze the performance of alarm and safety interlock systems of manufacturing processes. Because the most critical alarm and safety interlock systems are rarely activated, often little or no data from these systems are available for applying purely statistical DRA methods. Moskowitz et al. (2015) introduced a repeated-simulation, process-model-based technique for constructing informed prior distributions, generating low-variance posterior distributions for Bayesian analysis, and making alarm-performance predictions. This article presents a method of quantifying process model quality, which impacts the prior and posterior distributions used in Bayesian analysis. The method uses higher-frequency alarm and process data to select the most relevant constitutive equations and assumptions. New data-based probabilistic models that describe important special-cause event occurrences and operators' response times are proposed and validated with industrial plant data. These models can be used to improve estimates of failure probabilities for alarm and safety interlock systems. © 2016 American Institute of Chemical Engineers AIChE J, 62: 3461–3472, 2016

4.
The dependence structure in multivariate financial time series is of great importance in portfolio management. By studying daily return histories of 17 exchange-traded index funds, we identify important features of the data, and we propose two new models to capture these features. The first is an extension of the multivariate BEKK (Baba, Engle, Kraft, Kroner) model, which includes a multivariate t-type error distribution with different degrees of freedom. We demonstrate that this error distribution is able to accommodate different levels of heavy-tailed behaviour and thus provides a better fit than models based on a multivariate t with a common degree of freedom. The second model is copula based, and can be regarded as an extension of the standard and the generalized dynamic conditional correlation model [Engle, Journal of Business and Economic Statistics (2002) Vol. 17, 425–446; Cappiello et al. (2003) Working paper, UCSD] to a Student copula. Model comparison is carried out using criteria including the Akaike information criterion and the Bayesian information criterion. We also evaluate the two models from an asset-allocation perspective using a three-asset portfolio as an example, constructing optimal portfolios based on Markowitz theory. Our results indicate that, for our data, the proposed models both outperform the standard BEKK model, with the copula model performing better than the extension of the BEKK model.
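The asset-allocation comparison above feeds each model's covariance forecast into a Markowitz optimization. A minimal sketch of that final step is the global minimum-variance portfolio, whose weights are proportional to the inverse covariance matrix times a vector of ones; the covariance matrix below is illustrative, not estimated from the paper's data.

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio: w proportional to inv(Cov) @ 1,
    normalized so the weights sum to one (Markowitz, no short-sale limits)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)  # solve Cov @ w = 1 instead of inverting
    return w / w.sum()

# Illustrative 3-asset covariance matrix (annualized return variances).
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = min_variance_weights(cov)
```

In the paper's setting the covariance matrix is time-varying (BEKK or copula-DCC), so these weights are recomputed each period from the model's one-step-ahead forecast.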

5.
This work addresses the problem of estimating complete probability density functions (PDFs) from historical process data that are incomplete (lack information on rare events), in the framework of Bayesian networks. In particular, this article presents a method of estimating the probabilities of events for which historical process data have no record. The rare-event prediction problem becomes more difficult, and more interesting, when an accurate first-principles model of the process is not available. To address this problem, a novel method of estimating complete multivariate PDFs is proposed. This method uses the maximum entropy and maximum likelihood principles. It is tested on mathematical and process examples, and the application and satisfactory performance of the method in risk assessment and fault detection are shown. Also, the proposed method is compared with a few copula methods and a nonparametric kernel method in terms of performance, flexibility, interpretability, and rate of convergence. © 2014 American Institute of Chemical Engineers AIChE J, 60: 1013–1026, 2014

6.
Integrated safety analysis of hazardous process facilities calls for an understanding of both stochastic and topological dependencies, going beyond traditional Bayesian Network (BN) analysis to study cause-effect relationships among major risk factors. This paper presents a novel model based on the Copula Bayesian Network (CBN) for multivariate safety analysis of process systems. The innovation of the proposed CBN model lies in integrating the advantage of copula functions in modelling complex dependence structures with the cause-effect relationship reasoning of process variables using BNs. This offers great flexibility in the probabilistic analysis of individual risk factors while considering their uncertainty and stochastic dependence. Methods based on maximum likelihood estimation and information theory are presented to learn the structure of CBN models. The superior performance of the CBN model and its advantages compared to traditional BN models are demonstrated by application to an offshore managed-pressure-drilling case study.

7.
Economic evaluation of health care interventions based on decision analytic modelling can generate valuable information for health policy decision makers. However, the usefulness of the results obtained depends on the quality of the data input into the model; that is, the accuracy of the estimates for the costs, effectiveness, and transition probabilities between the different health states of the model. The aim of this paper is to review the use of Bayesian decision models in economic evaluation and to demonstrate how the individual components required for decision analytical modelling (i.e., systematic review incorporating meta-analyses, estimation of transition probabilities, evaluation of the model, and sensitivity analysis) may be addressed simultaneously in one coherent Bayesian model evaluated using Markov Chain Monte Carlo simulation implemented in the specialist Bayesian statistics software WinBUGS. To illustrate the method described, a simple probabilistic decision model is developed to evaluate the cost implications of using prophylactic antibiotics in caesarean section to reduce the incidence of wound infection. The advantages of using the Bayesian statistical approach outlined, compared to the conventional classical approaches to decision analysis, include the ability to: (i) perform all necessary analyses, including all intermediate analyses (e.g., meta-analyses) required to derive model parameters, in a single coherent model; (ii) incorporate expert opinion either directly or regarding the relative credibility of different data sources; (iii) use the actual posterior distributions for parameters of interest (as opposed to making the distributional assumptions necessary for the classical formulation); and (iv) incorporate uncertainty for all model parameters.

8.
Lorentz correction is used to correct the intensities of X-ray scattering in single-crystal diffractometry in order to recalculate intensities to obtain structure factors. This correction reduces the intensities to zero at zero diffraction angle. Small-angle scattering is used to study the dimensions of heterogeneities in polymeric materials. The scattering intensities near zero scattering angle originate partly from periodic systems (reciprocal lattice) and partly from dispersed particle systems. Periodic systems should result in individual Gaussian or Lorentzian peaks, with the position of a peak maximum depending on the length of the periodicity. Particle scattering results in a Gaussian peak centered at zero scattering angle. The effect of the Lorentz correction on the interpretation of small-angle X-ray scattering data is shown for some semicrystalline polyethylenes (high-density, linear low-density, and low-molecular-weight waxy polyethylenes). The data are compared to those for amorphous block copolymers (styrene–butadiene), in which there is a periodic system with homogeneous lamellar thickness. Lorentz correction destroys the characteristics of particle scattering and can be applied only to periodic systems. It should not be used to produce a peak in scattering data that do not show periodicity (peaks) without correction. © 2001 John Wiley & Sons, Inc. J Appl Polym Sci 80: 358–366, 2001

9.
Just-in-time (JIT) learning methods are widely used to deal with the nonlinear and multimode behavior of industrial processes. The locally weighted partial least squares (LW-PLS) method is among the most commonly used JIT methods. The performance of an LW-PLS model depends on the parameters of the similarity function as well as the structure and parameters of the local PLS model. However, the regular LW-PLS algorithm assumes that the parameters of the similarity function and the structure of the local PLS model are known, and it does not fully utilize available knowledge to estimate the model parameters. A Bayesian framework is proposed to provide a systematic way for real-time parameterization of the similarity function, selection of the local PLS model structure, and estimation of the corresponding model parameters. By applying Bayes' theorem, the proposed framework incorporates prior knowledge into the identification process and takes into account the different contributions of measurement noises. Furthermore, Bayesian model structure selection can automatically deal with the model complexity problem to avoid overfitting. The advantages of this new approach are highlighted through two case studies based on real-world near-infrared data. © 2014 American Institute of Chemical Engineers AIChE J, 61: 518–529, 2015
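The JIT idea underlying LW-PLS can be sketched with a local linear model standing in for the local PLS model: historical samples are weighted by a similarity function of their distance to the query point, and a weighted least-squares fit is built on demand. The Gaussian similarity form and the bandwidth parameter `phi` below are illustrative stand-ins for the similarity-function parameters the paper's Bayesian framework would tune.

```python
import numpy as np

def locally_weighted_predict(X, y, x_query, phi=1.0):
    """Just-in-time prediction: weight historical samples by similarity
    to the query, then fit a weighted least-squares model locally.
    (A linear model replaces the local PLS model for brevity.)"""
    d = np.linalg.norm(X - x_query, axis=1)
    w = np.exp(-d**2 / (2 * phi**2))           # Gaussian similarity
    sw = np.sqrt(w)[:, None]                    # sqrt-weights for WLS
    Xa = np.hstack([X, np.ones((len(X), 1))])   # affine (intercept) term
    beta, *_ = np.linalg.lstsq(Xa * sw, y * sw.ravel(), rcond=None)
    return float(np.append(x_query, 1.0) @ beta)
```

Because the model is refit at every query, no single global structure has to cover all operating modes, which is why JIT methods handle nonlinear and multimode behavior well.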

10.
Although fires can easily occur during cotton storage, research on cotton storage fire risk assessment is limited. This work focuses on cotton storage fire risk assessment and investigates the criticality of risk control strategies. Bow-tie and Bayesian network models are established to investigate the relationships among accident causes, safety barriers, and possible consequences. The results show that the first safety barrier (detection and extinguishment before fire brigade arrival) is more controllable and more effective than the second safety barrier (the fire brigade). Based on the collected probability data, the probability and risk of a common accident are higher than those of a large accident and a severe accident when the safety barriers succeed; when the first safety barrier fails, the probabilities and risks of large and severe accidents increase by more than 2000 times. The criticality of safety measures is investigated by analysing their structural importance, probability importance, and critical importance. The critical events for fire occurrence are an open flame and sparks during storage, and the critical events for detection and extinguishment before fire brigade arrival are watchkeeper monitoring, regular patrolling, and automatic fire alarm systems. This work and its outcomes support decision-making for fire risk prevention and control in cotton storage safety.
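The consequence side of a bow-tie maps directly to products of barrier success and failure probabilities. The stylized two-barrier sketch below mirrors the structure described above; all numbers are illustrative, not the paper's collected probability data.

```python
def consequence_probs(p_fire, p_barrier1_fail, p_barrier2_fail):
    """Bow-tie outcome probabilities for a stylized two-barrier fire
    scenario: barrier 1 = detection/extinguishment before the fire
    brigade arrives, barrier 2 = the fire brigade itself."""
    common = p_fire * (1 - p_barrier1_fail)                    # contained early
    large = p_fire * p_barrier1_fail * (1 - p_barrier2_fail)   # brigade succeeds
    severe = p_fire * p_barrier1_fail * p_barrier2_fail        # both barriers fail
    return common, large, severe

# Illustrative: 1% ignition chance, 10% / 20% barrier failure probabilities.
common, large, severe = consequence_probs(0.01, 0.1, 0.2)
```

Conditioning on barrier 1 failing (setting its failure probability to 1) reproduces the paper's observation that large- and severe-accident probabilities jump by orders of magnitude.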

11.
The crystallite shape ellipsoid in different varieties of silk fibers, namely (i) Chinese, (ii) Indian, and (iii) Japanese, has been computed using wide-angle X-ray data and Hosemann's one-dimensional paracrystalline model. The estimated microcrystalline parameters are correlated with the observed physical properties of the silk fibers. © 2001 John Wiley & Sons, Inc. J Appl Polym Sci 79: 1979–1985, 2001

12.
A method of designing model-predictive safety systems that can detect operation hazards proactively is presented. Such a proactive safety system has two major components: a set of operability constraints and a robust state estimator. The safety system triggers alarm(s) in real time when the process is unable to satisfy an operability constraint over a receding time-horizon into the future. In other words, the system uses a process model to project the process operability status and to generate alarm signals indicating the presence of a present or future operation hazard. Unlike typical existing safety systems, it systematically accounts for nonlinearities and interactions among process variables to generate alarm signals; it provides alarm signals tied to unmeasurable, but detectable, state variables; and it generates alarm signals before an actual operation hazard occurs. The application and performance of the method are shown using a polymerization reactor example. © 2016 American Institute of Chemical Engineers AIChE J, 62: 2024–2042, 2016

13.
With the growing complexity of industrial processes, the scale of production processes tends to be large. The significant amount of measurement data in large-scale processes poses challenges in data collection, management, and storage. To perform effective process monitoring in large-scale processes, the distributed process monitoring strategy is widely applied. Meanwhile, product quality is an important indicator for industrial production. Therefore, a novel quality-based distributed process monitoring scheme is proposed. Firstly, the Girvan-Newman (GN) algorithm from complex-network analysis divides the process variables into multiple sub-blocks. Secondly, greedy-algorithm-based high-dimensional mutual information (HDMI) is used to extract quality-related variables in each sub-block, through which irrelevant and redundant variables are eliminated. Thirdly, the decomposed modified partial least squares (DMPLS) approach is used to detect whether a fault is quality-related in each sub-block. Finally, a Bayesian inference strategy is adopted to combine the detection results of all sub-blocks. The effectiveness of the distributed DMPLS approach is illustrated through a numerical simulation and the Tennessee Eastman (TE) process. The results show the superiority of the proposed monitoring scheme.

14.
A D-vine copula mixture model and its application in fault detection
ZHENG Wenjing, LI Shaojun, JIANG Da. CIESC Journal (化工学报), 2017, 68(7): 2851-2858
Process monitoring is an effective means of ensuring the safe, stable operation of modern process industries and of maintaining product quality. Most traditional process-monitoring methods extract data features through dimensionality reduction and require the process data to satisfy restrictive conditions such as Gaussianity and linearity, so they struggle to detect faults that occur under complex operating conditions. A mixture D-vine copula fault-diagnosis model is therefore proposed that directly characterizes the complex dependencies in the data without dimensionality reduction, building a statistical model of the process variables that accurately describes nonlinear and non-Gaussian processes. The mixture-model parameters are optimized via the EM algorithm and pseudo-maximum-likelihood estimation; then, combining highest-density-region (HDR) and density-quantile theory, a generalized Bayesian inference probability (GBIP) index is constructed to monitor the process in real time. A numerical example and simulations on the TE process demonstrate the effectiveness of the mixture model and its good fault-detection performance.

15.
A mussel-inspired adhesive hydrogel with pH-, temperature-, and near-infrared (NIR) light-responsive behavior is designed. The hydrogel system is formulated by combining chitosan modified with catechol motifs, thermo-responsive poly(N-isopropylacrylamide) terminated with catechols, and light-absorbing polypyrrole nanoparticles (PpyNPs). The effects of catechol concentration, the molar ratio of Fe3+ to catechol units, pH, and the incorporation of the PpyNPs on the mechanical properties of the formed hydrogel are investigated. The responsive behaviors of the resulting hydrogel composite to pH, temperature, and NIR light are also demonstrated. The obtained hydrogel also shows promising adhesion to glass and steel substrates. It is anticipated that the fabricated adhesive hydrogel with multi-responsive behavior, especially NIR light response, can be useful in a wide range of biomedical applications, such as remotely controlled release systems and removable sealant materials.

16.
A novel data-driven adaptive robust optimization framework that leverages big data in process industries is proposed. A Bayesian nonparametric model, the Dirichlet process mixture model, is adopted and combined with a variational inference algorithm to extract the information embedded within uncertainty data. Further, a data-driven approach for defining the uncertainty set is proposed. This machine-learning model is seamlessly integrated with the adaptive robust optimization approach through a novel four-level optimization framework. This framework explicitly accounts for the correlation, asymmetry, and multimodality of uncertainty data, so it generates less conservative solutions. Additionally, the proposed framework is robust not only to parameter variations but also to anomalous measurements. Because the resulting multilevel optimization problem cannot be solved directly by any off-the-shelf solvers, an efficient column-and-constraint generation algorithm is proposed to address the computational challenge. Two industrial applications, on batch process scheduling and on process network planning, are presented to demonstrate the advantages of the proposed modeling framework and the effectiveness of the solution algorithm. © 2017 American Institute of Chemical Engineers AIChE J, 63: 3790–3817, 2017

17.
High-pressure Raman spectroscopy measurements in a diamond anvil cell (0–10 GPa) on 2-nitropropane/nitric acid/X (X = triethylamine, diethylamine, and water) ternary systems and 2-nitropropane/nitric acid/water/Y (Y = triethylamine and diethylamine) quaternary systems are reported. The modifications of the chemical behavior of the 2-nitropropane/nitric acid model system, induced by the presence of triethylamine, diethylamine, and/or water, were studied at ambient and high pressure. At ambient pressure, ionization of the nitric acid was observed with each of the additives. Moreover, in the case of the ethylamines, new peaks were observed, and the hypothesis of a 2-nitropropane/ethylamine complex is advanced. At high pressure, decomposition of the 2-nitropropane/nitric acid system, with an oxygen balance near zero, was observed only in the presence of triethylamine. The role of each additive to the 2-nitropropane/nitric acid system in modifying the respective reducing and oxidizing character of the components, and in the reactivity of the system, is discussed. Several hypotheses are advanced concerning the sensitizing effect of the additives on the 2-nitropropane/nitric acid system.

18.
In this paper, a probabilistic combination form of the local independent component regression (ICR) model is proposed for quality prediction of chemical processes with multiple operation modes. Through the introduction of the Bayesian inference strategy, the posterior probabilities of the data sample in different operation modes are calculated from two monitoring statistics of the independent component analysis (ICA) model. Then, based on the combination of local ICR models in different operation modes, a probabilistic multiple ICR (MICR) model is developed. Meanwhile, the operation mode information of the data sample is located through posterior analysis of the new model. To evaluate the multimode quality prediction performance of the proposed method, two case studies are provided.

19.
Data-driven models are widely used in process industries for monitoring and control purposes. No matter what kind of model one chooses, model-plant mismatch always exists; it is, therefore, important to implement model update strategies using the latest observation information from the investigated process. In practice, multiple observation sources, such as frequent but inaccurate and accurate but infrequent measurements, coexist for the same quality variable. In this article, we show how the flexibility of the Bayesian approach can be exploited to account for multiple-source observations with different degrees of belief. A practical Bayesian fusion formulation with time-varying variances is proposed to deal with possible abnormal observations. A sequential Monte Carlo sampling based particle filter is used to handle systematic and nonsystematic errors (i.e., bias and noise) simultaneously in the presence of process constraints. The proposed method is illustrated through a simulation example and a data-driven soft sensor application in an oil sands froth treatment process. © 2010 American Institute of Chemical Engineers AIChE J, 57: 1514–1525, 2011
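The core of Bayesian multi-source fusion, before the time-varying variances and particle filtering the paper adds, is a precision-weighted combination of independent Gaussian observations: the less noisy source gets proportionally more belief. The sensor values and variances below are illustrative.

```python
def fuse(mu1, var1, mu2, var2):
    """Precision-weighted Bayesian fusion of two independent Gaussian
    observations of the same quality variable, e.g. a frequent but
    noisy online sensor and an infrequent but accurate lab analysis."""
    w1, w2 = 1.0 / var1, 1.0 / var2          # precisions (degrees of belief)
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)   # posterior mean
    var = 1.0 / (w1 + w2)                    # posterior variance
    return mu, var

# Illustrative: noisy sensor reads 10.0 (var 4.0), lab reads 12.0 (var 1.0).
mu, var = fuse(10.0, 4.0, 12.0, 1.0)
```

The fused estimate lands much closer to the accurate source, and its variance is smaller than either input's; making the variances time-varying, as the paper proposes, lets the weights adapt when one source produces abnormal observations.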

20.
Lorentz correction is used to correct the intensities of X-ray scattering in single-crystal diffractometry in order to recalculate intensities to obtain structure factors. This correction reduces the intensities to zero at zero diffraction angle. Small-angle scattering is used to study the dimensions of heterogeneities in polymeric materials. The scattering intensities near zero scattering angle originate partly from periodic systems (reciprocal lattice) and partly from dispersed particle systems. Periodic systems should result in individual Gaussian or Lorentzian peaks, with the position of a peak maximum depending on the length of the periodicity. Particle scattering results in a Gaussian peak centred at zero scattering angle. The effect of the Lorentz correction on the interpretation of small-angle X-ray scattering data is shown in the case of some semicrystalline polyethylenes (high-density, linear low-density, and low-molecular-weight waxy polyethylenes). The data are compared with those for amorphous block copolymers (styrene/butadiene) in which there is a periodic system with homogeneous lamellar thickness. Lorentz correction destroys the characteristics of particle scattering and can be applied only to periodic systems. It should not be used to produce a peak in scattering data that do not show periodicity (peaks) without correction. © 2001 John Wiley & Sons, Inc. J Appl Polym Sci 80: 2300–2308, 2001
