Similar documents
20 similar documents found (search time: 46 ms)
1.
Optimization of technological processes depends on the relevant process design, properly selected column internals and a sufficient understanding of the process behaviour. This can only be achieved with the help of accurate and reliable process models. Along these lines, the present article suggests a new modelling concept – complementary modelling – for a large class of fluid engineering processes. Because of the diversity of process conditions and criteria, it is impossible to develop a unified modelling approach; instead, an efficient combination of different modelling approaches is advantageous. Complementary modelling is discussed in detail and illustrated with several case studies.

2.
Complementary modelling of fluid separation processes
Optimal functioning of numerous technological processes depends primarily on relevant process design, properly selected column internals and a sufficient understanding of the process behaviour. This can be achieved only with the help of accurate and reliable process models capable of considering process rates in a rigorous way, with respect to both transport phenomena and chemistry. In this article, a new modelling concept called complementary modelling is suggested for a large class of fluid separation processes. Since the conditions and criteria for these processes vary considerably, it is impossible to develop a unified modelling approach. Instead, a reasonable and effective combination of different modelling approaches provides solutions to many present and future tasks. Complementary modelling is discussed in detail and illustrated with several case studies.

3.
In internal rubber-mixing processes, data-driven soft sensors have become increasingly important for providing online estimates of the Mooney viscosity. Nevertheless, the prediction uncertainty of such models has rarely been explored. Additionally, traditional viscosity prediction models are based on single models and thus may not be appropriate for complex processes with multiple recipes and shifting operating conditions. To address both problems simultaneously, we propose a new ensemble Gaussian process regression (EGPR)-based modeling method. First, several local Gaussian process regression (GPR) models are built with the training samples in each subclass. Then, the prediction uncertainty is adopted to evaluate the probabilistic relationship between a new test sample and the local GPR models. Moreover, the prediction value and the prediction variance are generated automatically with Bayesian inference. The prediction results in an industrial rubber-mixing process show the superiority of EGPR in terms of prediction accuracy and reliability. © 2014 Wiley Periodicals, Inc. J. Appl. Polym. Sci. 2015, 132, 41432.
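As a rough, hypothetical sketch of the ensemble idea summarized above (not the authors' implementation): several local GPR models are fitted with scikit-learn and their predictions are combined with weights derived from each model's predictive variance, a simplification of the Bayesian weighting described in the abstract. The subclass split, kernel choice and toy data are assumptions.
```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy data split into two hypothetical "subclasses" (e.g., recipes) -- assumed.
X = rng.uniform(0.0, 10.0, size=(120, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(120)
labels = (X.ravel() > 5.0).astype(int)

# One local GPR model per subclass.
local_models = []
for c in np.unique(labels):
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gpr.fit(X[labels == c], y[labels == c])
    local_models.append(gpr)

def ensemble_predict(x_new, models):
    """Combine local GPR predictions; each model is weighted by the inverse of
    its predictive variance (a simplification of the Bayesian weighting)."""
    means, stds = zip(*(m.predict(x_new, return_std=True) for m in models))
    means, variances = np.array(means), np.array(stds) ** 2
    weights = 1.0 / (variances + 1e-12)
    weights /= weights.sum(axis=0)
    mean = (weights * means).sum(axis=0)
    var = (weights * (variances + means ** 2)).sum(axis=0) - mean ** 2
    return mean, np.sqrt(var)

mu, sigma = ensemble_predict(np.array([[2.5], [7.5]]), local_models)
print(mu, sigma)
```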

4.
The composition and quantity of styrene-maleic anhydride (SMA) copolymer resins were varied in emulsion copolymerizations of methyl methacrylate and n-butyl acrylate conducted by both batch and semicontinuous processes. The resulting particle sizes and levels of coagulum were measured to determine the optimum conditions for incorporation of the SMA resins into the resulting latexes. A semicontinuous process, in which no buffer was included and the SMA was added in a second stage comonomer emulsion, was found to produce coagulum-free latexes. These recipes, however, relied on nucleation of the polymer particles by conventional surfactants [nonyl phenol poly(ethylene) oxide and its corresponding sulfate salt] with a first-stage addition of a monomer emulsion prepared with these surfactants. SMA1000, having a 1/1 ratio of styrene to maleic anhydride in its copolymer, was determined to be the preferred resin (as opposed to SMA2000 and SMA3000, having SMA ratios of 2/1 and 3/1, respectively) because it interacted the least with conventional surfactants, which allowed its ready incorporation into coagulum-free recipes. © 1998 John Wiley & Sons, Inc. J Appl Polym Sci 70: 2729–2747, 1998

5.
Gasoline blending is a critical process with a significant impact on the total revenues of oil refineries. It consists of mixing several feedstocks coming from various upstream processes and small amounts of additives to make different blends with some specified quality properties. The major goal is to minimize operating costs by optimizing blend recipes, while meeting product demands on time and quality specifications. This work introduces a novel continuous-time mixed-integer linear programming (MILP) formulation based on floating time slots to simultaneously optimize blend recipes and the scheduling of blending and distribution operations. The model can handle non-identical blenders, multipurpose product tanks, sequence-dependent changeover costs, limited amounts of gasoline components, and multi-period scenarios. Because it features an integrality gap close to zero, the proposed MILP approach is able to find optimal solutions at much lower computational cost than previous contributions when applied to large gasoline blend problems. © 2016 American Institute of Chemical Engineers AIChE J, 62: 3002–3019, 2016
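For orientation only, the sketch below sets up a minimal single-period blend-recipe LP with PuLP; it is far simpler than the floating-slot MILP of the article, and the component names, costs, octane numbers, availabilities and the linear-blending assumption are all invented.
```python
# A minimal single-period blend-recipe LP (not the floating-slot MILP of the
# article); component data, the demand and the quality spec are made up.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

components = {              # cost ($/bbl), octane number, available bbl -- assumed
    "reformate":    (34.0, 98.0, 6000),
    "fcc_gasoline": (31.0, 92.0, 8000),
    "butane":       (20.0, 93.0, 1500),
}
demand, min_octane = 9000, 95.0     # product volume and quality spec (assumed)

x = {c: LpVariable(f"x_{c}", lowBound=0, upBound=components[c][2])
     for c in components}

prob = LpProblem("gasoline_blend", LpMinimize)
prob += lpSum(components[c][0] * x[c] for c in components)          # blend cost
prob += lpSum(x[c] for c in components) == demand                   # meet demand
prob += lpSum(components[c][1] * x[c] for c in components) >= min_octane * demand  # octane (linear blending assumed)

prob.solve()
print({c: value(x[c]) for c in components}, "cost:", value(prob.objective))
```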

6.
A new approach for the modeling and monitoring of multivariate processes in the presence of faulty and missing observations is introduced. It is assumed that the operating modes of the process transition among one another according to a Markov chain model. The transition probabilities of the Markov chain are time-varying as a function of the scheduling variable, so they can adapt to the different operating modes. To handle missing observations and unknown operating regimes, the expectation maximization algorithm is used to estimate the parameters. The proposed method is tested on two simulation case studies and one industrial case study, the latter being the diagnosis of abnormal operating conditions in the primary separation vessel of oil-sand processes. Compared with conventional methods, the proposed method shows superior performance in detecting the different operating conditions of the process. © 2014 American Institute of Chemical Engineers AIChE J, 61: 477–493, 2015
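A simplified illustration of one ingredient of this approach is sketched below: a forward filter for a two-mode Markov chain whose transition probabilities depend on a scheduling variable through a logistic parameterization. The EM parameter estimation of the article is not reproduced, and the parameter values, emission model and data are assumptions.
```python
# Forward filtering for a two-mode Markov chain with scheduling-variable-
# dependent transition probabilities; all numbers are illustrative.
import numpy as np

def transition_matrix(z, a=2.0, b=-1.0):
    """P(stay in mode 0) and P(stay in mode 1) as logistic functions of z."""
    p00 = 1.0 / (1.0 + np.exp(-(a + b * z)))
    p11 = 1.0 / (1.0 + np.exp(-(a - b * z)))
    return np.array([[p00, 1.0 - p00], [1.0 - p11, p11]])

def emission_likelihood(y, means=(0.0, 3.0), std=1.0):
    return np.exp(-0.5 * ((y - np.array(means)) / std) ** 2)

rng = np.random.default_rng(1)
T = 200
z = np.sin(np.linspace(0.0, 4.0 * np.pi, T))              # scheduling variable (assumed)
y = np.where(z > 0, 0.0, 3.0) + rng.standard_normal(T)    # toy measurements

alpha = np.array([0.5, 0.5])                              # initial mode probabilities
filtered = np.empty((T, 2))
for t in range(T):
    if t > 0:
        alpha = transition_matrix(z[t]).T @ alpha         # predict with time-varying P
    if not np.isnan(y[t]):                                # skip update if observation missing
        alpha = alpha * emission_likelihood(y[t])
    alpha /= alpha.sum()
    filtered[t] = alpha

print(filtered[-5:])
```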

7.
This paper is concerned with the development, simulation and experimental validation of a detailed antisolvent crystallization model. A population balance approach is adopted to describe the dynamic change of particle size in crystallization processes under the effect of antisolvent addition. The maximum likelihood method is used to identify the nucleation and growth kinetic models from data derived from controlled experiments. The model is then validated experimentally under a new solvent feed-rate profile and is shown to be in good agreement with the measurements. The resulting model is directly exploited to understand antisolvent crystallization behavior under varying antisolvent feeding profiles. More significantly, the model is proposed for the subsequent step of model-based optimization, to readily develop optimal antisolvent feeding recipes attractive for pharmaceutical and chemical crystallization operations.
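A minimal method-of-moments sketch of nucleation and growth kinetics under an assumed supersaturation profile is given below; the power-law kinetics, all parameter values and the feed profile are illustrative assumptions rather than the identified model of the article.
```python
# Method-of-moments sketch of antisolvent crystallization with power-law
# nucleation B = kb*(S-1)**b and size-independent growth G = kg*(S-1)**g.
# All kinetic parameters and the supersaturation profile are assumed.
import numpy as np
from scipy.integrate import solve_ivp

kb, b_exp, kg, g_exp = 1e6, 2.0, 1e-6, 1.5      # hypothetical kinetics
kv, rho_c = 0.5, 1200.0                         # shape factor, crystal density

def supersaturation(t):
    """Toy supersaturation profile driven by antisolvent feeding (assumed)."""
    return 1.0 + 0.5 * np.exp(-t / 600.0)

def moments_rhs(t, m):
    m0, m1, m2, m3 = m
    S = supersaturation(t)
    B = kb * max(S - 1.0, 0.0) ** b_exp         # nucleation rate [#/m^3/s]
    G = kg * max(S - 1.0, 0.0) ** g_exp         # growth rate [m/s]
    return [B, G * m0, 2.0 * G * m1, 3.0 * G * m2]

sol = solve_ivp(moments_rhs, (0.0, 3600.0), [0.0, 0.0, 0.0, 0.0], max_step=10.0)
m0, m1, m2, m3 = sol.y[:, -1]
print("number-mean size [m]:", m1 / m0, " solids mass [kg/m^3]:", rho_c * kv * m3)
```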

8.
The one-dimensional dispersion model has been solved analytically as well as numerically to describe flow in continuous “closed” boundary systems using the celebrated Danckwerts boundary conditions. Nevertheless, a continuous-state stochastic approach can sometimes be more appropriate, especially when the input fluctuations are of the same order as the time scale of the system; in such cases an accurate treatment of the boundary conditions is indispensable for the successful application of the method. A deterministic approach was carried out in which the differential equation was solved using Fourier's method and the Laplace transform. These solutions were used as a yardstick to assess the precision of the stochastic solution with its proposed boundary conditions conforming to the Danckwerts boundary conditions. The problem is somewhat simplified by assuming that the convection and dispersion terms are constants, independent of space and time. A stochastic differential equation was thus employed, governed by the Wiener process and solved using the Euler-Maruyama method.
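The following toy particle simulation illustrates the Euler-Maruyama step for the constant-coefficient convection-dispersion SDE mentioned above; the numerical values and the reflecting treatment of the closed inlet are assumptions, not the article's exact boundary formulation.
```python
# Euler-Maruyama simulation of tracer particles in the one-dimensional
# convection-dispersion model dX = u*dt + sqrt(2*D)*dW with constant u and D.
# Numbers and the reflecting inlet treatment are illustrative assumptions.
import numpy as np

u, D, L = 0.01, 1e-4, 1.0            # velocity [m/s], dispersion [m^2/s], length [m]
dt, n_particles = 0.1, 5000

rng = np.random.default_rng(42)
x = np.zeros(n_particles)            # all particles injected at the inlet
residence_time = np.full(n_particles, np.nan)
t, active = 0.0, np.ones(n_particles, dtype=bool)

while active.any():
    t += dt
    dW = np.sqrt(dt) * rng.standard_normal(active.sum())
    x[active] += u * dt + np.sqrt(2.0 * D) * dW      # Euler-Maruyama step
    x[active] = np.abs(x[active])                    # reflect at the closed inlet (assumption)
    exited = active & (x >= L)                       # particles leaving at the outlet
    residence_time[exited] = t
    active &= ~exited

print("mean residence time:", np.nanmean(residence_time), " plug-flow value:", L / u)
```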

9.
We focus on output feedback control of distributed processes whose infinite dimensional representation in appropriate Hilbert subspaces can be decomposed to finite dimensional slow and infinite dimensional fast subsystems. The controller synthesis issue is addressed using a refined adaptive proper orthogonal decomposition (APOD) approach to recursively construct accurate low dimensional reduced order models (ROMs) based on which we subsequently construct and couple almost globally valid dynamic observers with robust controllers. The novelty lies in modifying the data ensemble revision approach within APOD to enlarge the ROM region of attraction. The proposed control approach is successfully used to regulate the Kuramoto-Sivashinsky equation at a desired steady state profile in the absence and presence of uncertainty when the unforced process exhibits nonlinear behavior with fast transients. The original and the modified APOD approaches are compared in different conditions and the advantages of the modified approach are presented. © 2013 American Institute of Chemical Engineers AIChE J, 59: 4595–4611, 2013
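The snippet below sketches only the non-adaptive core of the method, snapshot POD via the SVD with projection of a new profile onto the retained modes; the synthetic snapshot data are assumed, and the APOD ensemble revision, observer and controller of the article are not reproduced.
```python
# Snapshot POD: extract a low-dimensional basis from solution snapshots via
# the SVD and project a new snapshot onto it.  Snapshot data are synthetic.
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 128)
t = np.linspace(0.0, 1.0, 60)
# Synthetic snapshot matrix: each column is the spatial profile at one time.
snapshots = np.array([np.sin(x - 2.0 * ti) + 0.3 * np.sin(3.0 * x + ti) for ti in t]).T

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.999) + 1)       # modes capturing 99.9 % of the energy
basis = U[:, :r]                                  # POD modes (columns)

new_profile = np.sin(x - 2.0 * 1.05) + 0.3 * np.sin(3.0 * x + 1.05)
coeffs = basis.T @ new_profile                    # reduced (modal) coordinates
reconstruction = basis @ coeffs
print("modes kept:", r, " relative error:",
      np.linalg.norm(new_profile - reconstruction) / np.linalg.norm(new_profile))
```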

10.
A novel networked process monitoring, fault propagation identification, and root cause diagnosis approach is developed in this study. First, the process network structure is determined from prior process knowledge and analysis. The network model parameters, including the conditional probability density functions of the different nodes, are then estimated from process operating data to characterize the causal relationships among the monitored variables. Subsequently, a Bayesian inference-based abnormality likelihood index is proposed to detect abnormal events in chemical processes. After a process fault is detected, novel dynamic Bayesian probability and contribution indices are further developed from the transitional probabilities of the monitored variables to identify the major faulty effect variables with significant upsets. With the dynamic Bayesian contribution index, statistical inference rules are then designed to search for the fault propagation pathways from the downstream back to the upstream process. In this way, the ending nodes of the identified propagation pathways can be captured as the root cause variables of the process faults. Meanwhile, the identified fault propagation sequence provides an in-depth understanding of the interactive effects of faults throughout the process. The proposed approach is demonstrated using the illustrative continuous stirred tank reactor system and the Tennessee Eastman chemical process, with the fault propagation identification results compared against those of a transfer entropy-based monitoring method. The results show that the networked process monitoring and diagnosis approach can accurately detect abnormal events, identify the fault propagation pathways, and diagnose the root cause variables. © 2013 American Institute of Chemical Engineers AIChE J, 59: 2348–2365, 2013
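As a toy version of the node-wise abnormality likelihood idea: each variable is modeled as conditionally Gaussian given its parents in an assumed three-node network, and the index for a new sample is the negative log-likelihood under that conditional density. The network, data and the omission of the dynamic contribution and propagation-path logic are all simplifications.
```python
# Node-wise abnormality likelihood index on an assumed chain x1 -> x2 -> x3.
import numpy as np

rng = np.random.default_rng(3)
n = 2000
x1 = rng.standard_normal(n)
x2 = 0.8 * x1 + 0.2 * rng.standard_normal(n)
x3 = -0.5 * x2 + 0.2 * rng.standard_normal(n)
data = {"x1": x1, "x2": x2, "x3": x3}
parents = {"x1": [], "x2": ["x1"], "x3": ["x2"]}       # assumed network structure

# Fit a conditional linear-Gaussian model for each node from "normal" data.
models = {}
for node, pa in parents.items():
    X = np.column_stack([np.ones(n)] + [data[p] for p in pa])
    beta, *_ = np.linalg.lstsq(X, data[node], rcond=None)
    resid = data[node] - X @ beta
    models[node] = (beta, resid.std())

def abnormality_index(sample):
    """Negative log-likelihood of each node given its parents (up to a constant)."""
    idx = {}
    for node, pa in parents.items():
        beta, sigma = models[node]
        mu = beta[0] + sum(b * sample[p] for b, p in zip(beta[1:], pa))
        idx[node] = 0.5 * ((sample[node] - mu) / sigma) ** 2 + np.log(sigma)
    return idx

print(abnormality_index({"x1": 0.1, "x2": 2.5, "x3": -1.2}))   # x2 is upset
```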

11.
A hybrid modeling approach is proposed for the modeling and identification of a uniformly sampled continuous process operating in a closed loop in the presence of noise. The approach combines the determination of an ARMAX model with the determination of a continuous model in an integrated modeling procedure. The resulting model is designated the HM(n,m,s) model. With this modeling approach, the identifiability of the HM(n,m,s) model is independent of the regulator used in the closed loop. In addition, the HM(n,m,s) model provides more accurate dynamic information than a conventionally estimated ARMAX model, which can sometimes be quite misleading. A systematic modeling strategy is proposed to obtain an adequate low-order model. Numerical examples using simulated data and the proposed HM(n,m,s) modeling approach are also given; their results demonstrate the clear advantage of the HM(n,m,s) modeling approach.
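For context, the sketch below estimates a plain discrete-time ARX model by least squares from simulated data, i.e. only the standard discrete-time ingredient; the HM(n,m,s) hybrid discrete/continuous procedure itself is not reproduced, and the "true" parameters are assumptions.
```python
# Least-squares estimation of an ARX(2,1) model
#   y(k) = -a1*y(k-1) - a2*y(k-2) + b1*u(k-1) + e(k)
# from simulated data; the true parameters below are assumed for the demo.
import numpy as np

rng = np.random.default_rng(7)
N = 500
a1, a2, b1 = -1.5, 0.7, 0.5                  # "true" plant (assumed)
u = np.sign(rng.standard_normal(N))          # pseudo-random binary input
y = np.zeros(N)
for k in range(2, N):
    y[k] = -a1 * y[k - 1] - a2 * y[k - 2] + b1 * u[k - 1] + 0.05 * rng.standard_normal()

# Build the regression matrix and solve by linear least squares.
Phi = np.column_stack([-y[1:N - 1], -y[0:N - 2], u[1:N - 1]])
Y = y[2:N]
theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
print("estimated [a1, a2, b1]:", theta, " true:", [a1, a2, b1])
```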

12.
Two methodological improvements of the design of dynamic experiments (C. Georgakis, Ind Eng Chem Res. 2013) for the modeling and optimization of (semi-)batch processes are proposed. Their effectiveness is evaluated in two representative classes of biopharmaceutical processes. First, we incorporate prior process knowledge in the design of the experiments. Many batch processes, and biopharmaceutical processes in particular, are usually not understood completely enough to enable the development of an accurate knowledge-driven model. However, partial process knowledge is often available and should not be ignored; we demonstrate here how to incorporate such knowledge. Second, we introduce an evolutionary modeling and optimization approach to minimize the initial number of experiments in the face of budgetary and time constraints. The proposed approach starts with the estimation of only a linear Response Surface Model, which requires the minimum number of experiments. Accounting for the model's uncertainty, the proposed approach calculates a process optimum that meets a maximum uncertainty constraint. © 2017 American Institute of Chemical Engineers AIChE J, 63: 2796–2805, 2017
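A minimal illustration of the linear Response Surface Model starting point is sketched below: an OLS fit on a small two-factor design together with the prediction standard error used to express model uncertainty. The design points, responses and noise level are made up.
```python
# Linear response surface y = b0 + b1*x1 + b2*x2 fitted to a 2^2 factorial
# design plus a centre point, with OLS-based prediction uncertainty.
# Design points and responses are hypothetical.
import numpy as np

X_design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0]], dtype=float)
y = np.array([76.1, 81.9, 79.4, 86.0, 80.7])      # hypothetical responses

X = np.column_stack([np.ones(len(y)), X_design])  # add intercept column
beta, res, rank, _ = np.linalg.lstsq(X, y, rcond=None)
dof = len(y) - X.shape[1]
sigma2 = float(res[0]) / dof if res.size else 0.0
cov_beta = sigma2 * np.linalg.inv(X.T @ X)        # covariance of the coefficients

def predict(x1, x2):
    x = np.array([1.0, x1, x2])
    return x @ beta, np.sqrt(x @ cov_beta @ x)    # prediction mean and standard error

print("coefficients:", beta)
print("prediction at (0.5, 0.5):", predict(0.5, 0.5))
```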

13.
This article presents a regression-based monitoring approach for diagnosing abnormal conditions in complex chemical process systems. Such systems typically yield process variables that may be both Gaussian and non-Gaussian distributed. The proposed approach utilizes the statistical local approach to monitor parametric changes of the latent variable model that is identified by a revised non-Gaussian regression algorithm. Based on a numerical example and recorded data from a fluidized bed reactor, the article shows that the proposed approach is more sensitive than existing work in this area. A detailed analysis of both application studies highlights that the introduced non-Gaussian monitoring scheme extracts latent components that provide a better approximation of the non-Gaussian source signals and/or is more sensitive in detecting process abnormalities. © 2013 American Institute of Chemical Engineers AIChE J, 60: 148–159, 2014

14.
褚菲, 程相, 代伟, 赵旭, 王福利. 《化工学报》, 2018, 69(6): 2567-2575
A quality prediction method for batch processes based on process transfer is proposed, aiming to solve the problem that a new batch process lacks sufficient data to build an accurate prediction model. Based on a multivariate statistical regression model, the method constructs a common latent-variable space between similar batch processes, so that data from existing similar batch processes can be transferred to the new, as yet unmodelled batch process, enabling rapid modelling and quality prediction for the new process. During online application, the process-transfer model is continuously updated with online data; at the same time, the confidence interval of the model prediction error is estimated in real time to judge the stability of the prediction error. To overcome the adverse effect that differences between similar processes may have on the transfer model, data from the similar batch processes are gradually removed according to data similarity. Finally, the effectiveness of the proposed method is verified by simulation experiments.
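A much-simplified illustration of borrowing data from a similar process is sketched below: a PLS latent-variable model fitted on pooled source-plus-target batches is compared with one trained on the scarce target batches alone. The data generator and the simple pooling strategy are assumptions; the common-latent-space construction, online updating and similarity-based data elimination of the article are not reproduced.
```python
# Pooled-data PLS versus target-only PLS as a crude stand-in for process
# transfer; the data generator and pooling strategy are assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)

def make_batches(n, shift):
    X = rng.standard_normal((n, 10))
    y = X[:, :3].sum(axis=1) + shift + 0.1 * rng.standard_normal(n)
    return X, y

X_src, y_src = make_batches(200, shift=0.0)      # data-rich similar process
X_tgt, y_tgt = make_batches(8, shift=0.3)        # scarce new-process batches
X_test, y_test = make_batches(100, shift=0.3)

pls_target_only = PLSRegression(n_components=2).fit(X_tgt, y_tgt)
pls_pooled = PLSRegression(n_components=2).fit(
    np.vstack([X_src, X_tgt]), np.concatenate([y_src, y_tgt]))

for name, model in [("target only", pls_target_only), ("pooled", pls_pooled)]:
    rmse = np.sqrt(np.mean((model.predict(X_test).ravel() - y_test) ** 2))
    print(name, "RMSE:", round(rmse, 3))
```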

15.
The choice between alternative dye recipes to match the same colour using reactive dyes on cotton does not reduce simply to comparing the costs of the individual dyes and their proportions in the recipes. Often the conditions of application of the recipes are significantly different, and it is not sufficiently accurate to represent process cost by a constant factor independent of applied depth or dyeing-cycle profile. For example, depending on the dyeing cycle and machine selected, a pale depth costing perhaps 5 p/kg for the dyes required would demand a further 10–25 p/kg in process costs, even if no shading problems arose. The processing ‘surcharge’ on a 50 p/kg full-depth recipe would amount to 15–40 p/kg dyed faultlessly. The cost-effectiveness of the process is adversely affected if shading and reprocessing become necessary. Shading corrections to a pastel dyeing may be almost negligible in terms of dye cost, but an extra 10–15% in processing cost is registered each time. A substantial addition to a full-depth dyeing from a fresh bath is a severe penalty, however, amounting to a major proportion of the cost of the original dyeing. The contribution of the fixed-cost component to the total costs of dyeing can become a heavy burden if the productive capacity is not being used effectively. It makes no economic sense to replace an unsophisticated but still viable winch with a costly jet machine unless the available workload and technological control systems can justify this decision by ensuring that the new machine is kept full and effectively occupied throughout its productive life. The relationships between these factors are illustrated by means of typical examples, based as far as possible on realistic recipe costs and process details.
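Purely as a worked reading of the figures quoted above (midpoints of the quoted ranges assumed for illustration), the cost structure might be laid out as:
```latex
% Midpoints of the quoted cost ranges are assumed purely for illustration.
\begin{align*}
\text{pale depth:}\quad & C \approx \underbrace{5}_{\text{dyes}} + \underbrace{17.5}_{\text{process (10--25 p/kg)}} \approx 22.5\ \text{p/kg},\\
\text{full depth:}\quad & C \approx \underbrace{50}_{\text{dyes}} + \underbrace{27.5}_{\text{process (15--40 p/kg)}} \approx 77.5\ \text{p/kg},\\
\text{each shading correction (pastel):}\quad & \Delta C \approx (0.10\text{--}0.15)\times C_{\text{process}}.
\end{align*}
```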

16.
Embedding a discrete-time autoregressive moving average (DARMA) process in a continuous-time ARMA (CARMA) process has been discussed by many authors, who have considered the relationship between the autocovariance structures of continuous-time and related discrete-time processes. In this article, we treat the problem from a slightly different point of view: we define embedding in a stricter way by taking account of the probability structure, and we consider Gaussian processes. First, we summarize the necessary and sufficient condition for a DARMA process to be embeddable in a CARMA process. Secondly, we give a concrete condition under which a DARMA process can be embedded in a CARMA process; this condition is new and general. Thirdly, we treat some special cases, including new examples, and show how embeddability can be examined for them.
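A standard textbook illustration of embeddability, not taken from the article itself: equally spaced samples of a CAR(1) (Ornstein-Uhlenbeck) process form a Gaussian AR(1), so a Gaussian AR(1) is embeddable exactly when its coefficient lies in (0, 1):
```latex
% CAR(1) (Ornstein--Uhlenbeck) process and its equally spaced samples:
\begin{align*}
dX_t &= -\alpha X_t\,dt + \sigma\,dW_t, \qquad \alpha > 0,\\
X_{(n+1)\Delta} &= \underbrace{e^{-\alpha\Delta}}_{\phi\,\in\,(0,1)} X_{n\Delta} + \varepsilon_n,
\qquad \varepsilon_n \sim \mathcal{N}\!\left(0,\ \frac{\sigma^{2}}{2\alpha}\bigl(1-e^{-2\alpha\Delta}\bigr)\right).
\end{align*}
% Hence a Gaussian AR(1) with coefficient \phi can be embedded in a CAR(1)
% if and only if 0 < \phi < 1, with \alpha = -\ln\phi/\Delta.
```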

17.
This study presents a new approach to investigating the drying behavior and the structure of the deposit resulting from the drying of solids-containing micro-droplets. It is shown that the deposit structure (porosity and “footprint”) depends on the drying conditions. This dependency may contribute to a better understanding of particle-forming processes, such as fluidized bed coating. In the framework of this study, sessile droplets containing sodium benzoate dissolved in water were dried on thin glass plates in a small drying chamber. The drying conditions (temperature, moisture content and flow rate of the drying gas) and material parameters (solid content of the solution) were varied systematically. The drying rate of the droplets was determined from the moisture balance of the drying gas. The final three-dimensional shape of the dried sessile droplets was measured using white-light interferometry and transformed into a two-dimensional profile using a Monte Carlo method. Moreover, the mean porosity of the dried droplets was calculated. By comparing the structural information with the process conditions, it is shown that the drying process may have a large influence on the deposit structure. © 2018 American Institute of Chemical Engineers AIChE J, 64: 2002–2016, 2018

18.
19.
A new two-constant theory for colour matching has been developed based on the Kubelka–Munk theory. Colorant formulations and algorithms for matching the tristimulus, K/S and reflectance values of a standard are presented based on the new theory. The algorithms are suitable for a single-constant theory as well as a two-constant theory. The experimental data show that the recipes predicted by the new two-constant theory are closer to the actual recipes of the standard sample than those predicted by the single-constant theory, and also give smaller colour difference values for some disperse dyes. The results show that the scattering of some disperse dyes cannot be neglected, and that recipes matching textiles coloured with these disperse dyes should be predicted using the new two-constant theory.
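For orientation, the textbook Kubelka–Munk relations underlying the single- and two-constant approaches are reproduced below; these are the standard forms, not the article's revised two-constant equations.
```latex
% Kubelka--Munk function of the reflectance R_\infty of an opaque layer,
% followed by the standard single- and two-constant mixture laws
% (subscript t = substrate; c_i, k_i, s_i = concentration, unit absorption
% and unit scattering of colorant i).
\begin{align*}
\frac{K}{S} &= \frac{(1-R_\infty)^{2}}{2R_\infty},\\[4pt]
\left(\frac{K}{S}\right)_{\mathrm{mix}} &= \left(\frac{K}{S}\right)_{t} + \sum_i c_i\left(\frac{k}{s}\right)_{i}
&&\text{(single-constant: colorant scattering neglected),}\\[4pt]
\left(\frac{K}{S}\right)_{\mathrm{mix}} &= \frac{K_t + \sum_i c_i k_i}{S_t + \sum_i c_i s_i}
&&\text{(two-constant: each colorant absorbs and scatters).}
\end{align*}
```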

20.
This work presents the application of nonlinear model predictive control (NMPC) to a simulated industrial batch reactor subject to a safety constraint due to reactor level swelling, which can occur with relatively fast dynamics. Uncertainties in the implementation of recipes in batch process operation are of significant industrial relevance. The paper describes a novel control-relevant formulation of the excessive liquid rise problem for a two-phase batch reactor subject to recipe uncertainties. The control simulations are carried out using the dedicated NMPC and optimization software toolbox OptCon, which implements efficient numerical algorithms. The open-loop optimal control problem is computed using the multiple-shooting technique, and the resulting nonlinear programming problem is solved using a sequential quadratic programming (SQP) algorithm tailored for large-scale problems, based on the freeware optimization environment HQP. The fast response of the NMPC controller is guaranteed by the initial-value-embedding and real-time-iteration technologies. It is concluded that the OptCon implementation allows small sampling times and that the controller is able to maintain safe and optimal operating conditions, with good control performance despite significant uncertainties in the implementation of the batch recipe.
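The sketch below is a toy receding-horizon controller for a scalar level model with a hard upper bound standing in for the swelling constraint, solved with SciPy's SLSQP routine; the model, numbers and solver choice are illustrative assumptions and are unrelated to OptCon or the multiple-shooting scheme of the article.
```python
# Toy receding-horizon (NMPC-style) control of a scalar level model
# x[k+1] = x[k] + dt*(u[k] - d), with a hard upper bound on the level.
# Everything here (model, numbers, solver) is an illustrative assumption.
import numpy as np
from scipy.optimize import minimize

dt, N = 0.5, 10                    # sample time and prediction horizon
x_max, x_ref, d = 1.0, 0.9, 0.2    # safety limit, set point, nominal outflow

def simulate(x0, u_seq):
    x, traj = x0, []
    for u in u_seq:
        x = x + dt * (u - d)
        traj.append(x)
    return np.array(traj)

def solve_ocp(x0, u_guess):
    cost = lambda u: np.sum((simulate(x0, u) - x_ref) ** 2) + 1e-2 * np.sum(u ** 2)
    cons = [{"type": "ineq", "fun": lambda u: x_max - simulate(x0, u)}]   # x <= x_max
    res = minimize(cost, u_guess, method="SLSQP",
                   bounds=[(0.0, 0.6)] * N, constraints=cons)
    return res.x

x, u_prev = 0.3, np.full(N, 0.2)
for k in range(15):                                   # closed loop
    u_opt = solve_ocp(x, u_prev)
    x = x + dt * (u_opt[0] - (d + 0.05))              # plant with a recipe mismatch
    u_prev = np.roll(u_opt, -1)                       # warm start for the next step
print("final level:", x)
```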

