Similar Articles
 20 similar articles found (search time: 5 ms).
1.
In this work a new approach for parameter estimation is proposed, based on decomposing the problem into two subproblems: the first subproblem generates an Artificial Neural Network (ANN) model from the given data, and the second subproblem uses the ANN model to obtain an estimate of the parameters. The analytical derivatives of the ANN model obtained in the first subproblem supply the differential terms in the formulation of the second subproblem, which greatly simplifies the parameter estimation problem. The key advantage of the proposed approach is that the solution of a single large optimization problem requiring high computational resources is avoided; instead, two smaller problems are solved. The approach is particularly useful for large, noisy data sets and nonlinear models, where ANN models are known to perform quite well and therefore play an important role in the solution of the overall parameter estimation problem.
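A minimal sketch of this two-subproblem idea, assuming a simple first-order model dy/dt = -k*y and a hand-rolled single-hidden-layer network (not the authors' implementation): the network is fitted to the data first, and its analytical derivative then supplies the differential term used to estimate k.

```python
# Hypothetical sketch of the two-subproblem approach (not the authors' code):
# stage 1 fits a tiny ANN surrogate y_hat(t) to noisy data, stage 2 uses the
# surrogate's analytical derivative to estimate k in dy/dt = -k*y.
import numpy as np
from scipy.optimize import minimize, least_squares

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 40)
y_obs = np.exp(-1.3 * t) + 0.02 * rng.standard_normal(t.size)  # synthetic data, true k = 1.3

n_hidden = 8

def unpack(w):
    w1 = w[:n_hidden]; b1 = w[n_hidden:2 * n_hidden]
    w2 = w[2 * n_hidden:3 * n_hidden]; b2 = w[-1]
    return w1, b1, w2, b2

def ann(w, t):
    w1, b1, w2, b2 = unpack(w)
    return np.tanh(np.outer(t, w1) + b1) @ w2 + b2

def ann_dt(w, t):
    # analytical derivative of the surrogate with respect to t
    w1, b1, w2, b2 = unpack(w)
    h = np.tanh(np.outer(t, w1) + b1)
    return ((1.0 - h ** 2) * w1) @ w2

# Subproblem 1: train the ANN surrogate on the data.
w0 = 0.1 * rng.standard_normal(3 * n_hidden + 1)
fit = minimize(lambda w: np.sum((ann(w, t) - y_obs) ** 2), w0, method="BFGS")
w_ann = fit.x

# Subproblem 2: estimate k from the residual of the model equation dy/dt + k*y = 0,
# evaluated with the surrogate and its analytical derivative.
res = least_squares(lambda k: ann_dt(w_ann, t) + k[0] * ann(w_ann, t), x0=[0.5])
print("estimated k:", res.x[0])
```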

2.
Nonlinear kinetic parameter estimation using simulated annealing
The performance of simulated annealing (SA) in nonlinear kinetic parameter estimation was studied and compared with the classical Levenberg–Marquardt (L–M) algorithm. Both methods were tested on a set of three kinetic models of progressively higher complexity, describing the catalytic wet air oxidation of phenol carried out in a small-scale trickle bed reactor. The first model considered only the phenol disappearance reaction, while the other two included oxidation intermediates; the number of model parameters increased from 3 to 23 and 38, respectively. Both algorithms gave good results for the first model, although L–M was superior in terms of computation time. For the second model both algorithms achieved convergence, but SA yielded a better objective value and kinetic parameters with physical meaning. For the most complex model, only SA was capable of achieving convergence, whereas L–M failed. For the second and third models, the SA solution could be further improved when used as an initial guess for the L–M algorithm.
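The two-stage strategy described above (global simulated annealing followed by local L–M refinement) can be sketched with SciPy's generic optimizers; the Arrhenius-type rate form and all data below are hypothetical, not the paper's trickle-bed model.

```python
# Hedged sketch (not the paper's code): global search with simulated annealing,
# then local refinement with Levenberg-Marquardt, for a first-order rate model
# C(t) = exp(-k0 * exp(-Ea/(R*T)) * t) fitted to synthetic data.
import numpy as np
from scipy.optimize import dual_annealing, least_squares

R = 8.314
rng = np.random.default_rng(1)
t = np.tile(np.linspace(0, 60, 10), 3)                 # min
T = np.repeat([413.0, 433.0, 453.0], 10)               # K
true = (5.0e6, 6.0e4)                                  # k0 [1/min], Ea [J/mol]
C = np.exp(-true[0] * np.exp(-true[1] / (R * T)) * t) * (1 + 0.02 * rng.standard_normal(t.size))

def residuals(p):
    k0, Ea = p
    return np.exp(-k0 * np.exp(-Ea / (R * T)) * t) - C

def sse(p):
    return np.sum(residuals(p) ** 2)

# Stage 1: simulated-annealing-type global search over wide bounds.
bounds = [(1e3, 1e9), (3e4, 1e5)]
sa = dual_annealing(sse, bounds=bounds, seed=1)

# Stage 2: Levenberg-Marquardt refinement starting from the SA solution.
lm = least_squares(residuals, sa.x, method="lm")
print("SA estimate:", sa.x)
print("SA + LM    :", lm.x)
```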

3.
The generalized delta rule (GDR) algorithm combined with generalized predictive control (GPC) was implemented experimentally to track a temperature set point in a batch, jacketed polymerization reactor. An equation for the optimal temperature was obtained using the co-state Hamiltonian and the model equations. To track the calculated optimal temperature profiles, the controller should act as smoothly and precisely as possible, and the experimental application was carried out to obtain the desired comparison. In the design of this control system, the reactor filled with a styrene–toluene mixture is treated as a heat exchanger: the reactor is heated by an immersed heater while cooling water is passed through the cooling jacket, so the cooling water absorbs the heat given out by the heater and the reactor can be considered continuous in terms of energy. When such a mixing chamber is used as a polymerization reactor with defined values of heat input and cooling flow rate, the system can reach steady state. The heat released during the reaction is treated as a disturbance to the heat exchanger, and the heat input from the immersed heater is chosen as the manipulated variable. A neural network model relating the reactor temperature to the heat input to the reactor is used. The performance of GDR with GPC was compared with the results obtained using nonlinear GPC with a NARMAX model. The reactor temperature closely follows the optimal trajectory, and the molecular weight, experimental conversion and chain lengths are then obtained for GDR with GPC.

4.
A dynamic model for continuous ethylene-propylene-diene terpolymerization reactors in which crosslinking and gel formation are attributable to reactions between the pendant double bonds of diene units has been developed. The model is applicable to other types of crosslinking reactions such as those due to aging, polymer blending, and vulcanization. The polymer properties at the gel point and in the post-gel region are computed using the numerical fractionation method. Direct application of this method to the prediction of terpolymer properties in the gel or post-gel region can lead to severe numerical problems, due to large differences in order of magnitude of various moments across the generations. These problems are overcome by applying a pseudo-kinetic rate constant method, i.e., by constructing a moment model for a pseudo-homopolymer that approximates the behavior of the actual terpolymer under the long chain and quasi-steady state assumptions. The pseudo-homopolymer model is then used as the basis for application of the numerical fractionation method. We show that the proposed dynamic model is capable of predicting realistic polydispersities and molecular weight distributions even near the gel point with as few as 11 generations, and in the post-gel region with as few as five generations. The largest steady-state polydispersities of the soluble polymer are obtained when the crosslinking rate just exceeds the critical value for gelation. The steady-state polydispersity decreases exponentially in the post-gel region at higher values of the rate constant, while the sol fraction decreases in a more linear fashion. The overall molecular weight distribution (MWD) of the sol is constructed assuming a Schulz two parameter distribution for each generation. For the industrial case of a small number of crosslinks, the first two generations contribute the most to the MWD, which is unimodal. The tail of the MWD is longest near the initial gelation time; the tail is shortened in the post-gel region as higher generations are consumed.

5.
The complex flow patterns induced in fluidized bed catalytic reactors (FCRs) and the competing parameters affecting the mass and heat transfer characteristics make the design of such reactors a challenging task. Models of such processes rely heavily on predictive empirical correlations for the mass and heat transfer coefficients; unfortunately, published empirical correlations share the common shortcoming of low prediction accuracy when compared with experimental data. In this work, an artificial neural network (ANN) approach is used to capture the heat and mass transfer characteristics of the reactor from published experimental data. The developed ANN-based heat and mass transfer coefficient relations were incorporated into a conventional FCR model and simulated under industrial operating conditions. The hybrid model predictions of the melt-flow index and the emulsion temperature were compared with industrial measurements as well as with published models, and the predictive quality of the hybrid model was superior to that of the other models. This modeling approach can be used as an alternative to conventional modeling methods.
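An illustrative sketch of the general idea of replacing an empirical transfer-coefficient correlation with a trained network; the data and the Nusselt-type correlation form below are synthetic stand-ins, not the industrial data of the paper.

```python
# Illustrative sketch only (synthetic data, hypothetical correlation form): an ANN is
# trained to return a heat-transfer coefficient from operating variables and then used
# inside a reactor energy balance in place of an empirical Nu-correlation.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
Re = rng.uniform(1e3, 1e5, 500)
Pr = rng.uniform(0.7, 5.0, 500)
Nu = 0.023 * Re ** 0.8 * Pr ** 0.4 * (1 + 0.05 * rng.standard_normal(500))  # stand-in "experimental" data

X = np.column_stack([np.log10(Re), Pr])
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
ann.fit(X, np.log10(Nu))

def heat_transfer_coeff(Re, Pr, k_fluid=0.03, d_p=0.02):
    """ANN-based replacement for the empirical correlation: h = Nu * k / d_p."""
    Nu_pred = 10 ** ann.predict(np.array([[np.log10(Re), Pr]]))[0]
    return Nu_pred * k_fluid / d_p

print("h at Re=2e4, Pr=1.2:", heat_transfer_coeff(2e4, 1.2), "W/m2K")
```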

6.
The development of predictive models is a time-consuming, knowledge-intensive, iterative process in which an approximate model is proposed to explain experimental data, the model parameters that best fit the data are determined, and the model is subsequently refined to improve its predictive capabilities. Ascertaining the validity of the proposed model depends on how thoroughly the parameter search has been conducted in the allowable range. The determination of the optimal model parameters is complicated by the complexity and non-linearity of the model, the potentially large number of equations and parameters, the poor quality of the data, and the lack of tight bounds for the parameter ranges. In this paper, we critically evaluate a hybrid search procedure that employs a genetic algorithm to identify promising regions of the solution space, followed by an optimizer that searches locally in the identified regions. This procedure is capable of identifying solutions that are essentially equivalent to the global optimum reported by a state-of-the-art global optimizer, but much faster. A 13-parameter model that results in 60 differential-algebraic equations for propane aromatization on a zeolite catalyst is proposed as a more challenging test case to validate the algorithm. The hybrid technique located multiple solutions that are nearly as good with respect to the "sum of squares" error criterion but imply significantly different physical situations.
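A hedged sketch of the hybrid global-then-local idea, using SciPy's differential evolution as a stand-in for the genetic algorithm and a gradient-based local solver for refinement; the two-parameter model and data are hypothetical.

```python
# Sketch of evolutionary global search followed by local refinement (not the paper's code).
import numpy as np
from scipy.optimize import differential_evolution, minimize

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 30)
y_obs = 2.0 * (1 - np.exp(-0.7 * t)) + 0.05 * rng.standard_normal(t.size)

def sse(p):
    a, k = p
    return np.sum((a * (1 - np.exp(-k * t)) - y_obs) ** 2)

bounds = [(0.1, 10.0), (0.01, 5.0)]

# Step 1: evolutionary search identifies a promising region of the parameter space.
ga = differential_evolution(sse, bounds, seed=3, maxiter=100, polish=False)

# Step 2: local optimizer refines the best individual found by the evolutionary step.
local = minimize(sse, ga.x, method="L-BFGS-B", bounds=bounds)
print("global step:", ga.x, "refined:", local.x)
```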

7.
In complex reaction systems, such as those found in heterogeneous catalysis, several alternative kinetic models are usually considered in an effort to describe the reaction kinetics. The number of plausible mechanisms can be very large, even for systems with a small number of reactions and components. Usually, only a restricted number of models are investigated in detail, since the evaluation of a large number of complex models is extremely time-consuming. In this work, a methodology is described that allows an efficient global search over all plausible models and parameter sets using the Non-dominated Sorting Genetic Algorithm II (NSGA-II). The developed methodology is applied to the parameter estimation and model optimization of the partial oxidation of ethane reaction network. The approach allows a considerable number of candidate mechanisms to be investigated reliably, automatically and in a short computational time, and appears to be a very effective way to optimize complex reaction mechanisms.
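A much simpler stand-in for this model-screening idea (the paper uses NSGA-II; here each hypothetical candidate rate expression is simply fitted by a global optimizer and ranked by a crude fit-plus-complexity score):

```python
# Simplified, hypothetical model screening: not NSGA-II, just fit-and-rank over candidates.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(10)
C = np.linspace(0.1, 2.0, 25)                          # hypothetical ethane concentration
r_obs = 1.2 * C / (1.0 + 0.8 * C) + 0.02 * rng.standard_normal(C.size)

candidates = {
    "power law r=k*C^n":      (lambda p, C: p[0] * C ** p[1],           [(0, 5), (0, 3)]),
    "first order r=k*C":      (lambda p, C: p[0] * C,                   [(0, 5)]),
    "Langmuir r=k*C/(1+K*C)": (lambda p, C: p[0] * C / (1 + p[1] * C),  [(0, 5), (0, 5)]),
}

results = []
for name, (rate, bounds) in candidates.items():
    sse = lambda p, rate=rate: np.sum((rate(p, C) - r_obs) ** 2)
    fit = differential_evolution(sse, bounds, seed=0)
    score = fit.fun + 0.01 * len(bounds)               # crude fit-plus-complexity criterion
    results.append((score, name, fit.x))

for score, name, params in sorted(results):
    print(f"{name:26s} score={score:.4f} params={np.round(params, 3)}")
```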

8.
Hydrodesulfurization (HDS) of crude oil has not been widely reported in the literature and is one of the most challenging tasks in the petroleum refining industry. In order to obtain useful HDS process models that can be confidently applied to reactor design, operation and control, accurate estimation of the kinetic parameters of the relevant reaction scheme is required. In this work, an optimization technique is used to obtain the best values of the kinetic parameters of the trickle-bed reactor (TBR) process used for HDS of crude oil, based on pilot plant experiments. The technique minimizes the sum of squared errors (SSE) between the experimental and predicted concentrations of sulfur compounds in the products using two approaches: linear (LN) and non-linear (NLN) regression. A set of experiments was carried out in a continuous-flow isothermal trickle-bed reactor using crude oil as a feedstock and commercial cobalt–molybdenum on alumina (Co–Mo/γ-Al2O3) as a catalyst. The reactor temperature was varied from 335 to 400 °C, the hydrogen pressure from 4 to 10 MPa and the liquid hourly space velocity (LHSV) from 0.5 to 1.5 h−1, keeping the hydrogen-to-oil ratio (H2/oil) constant at 250 L/L. A steady-state heterogeneous model is developed based on two-film theory, which includes mass transfer phenomena in addition to correlations for estimating the physicochemical properties of the compounds. The hydrodesulfurization reaction is described by Langmuir–Hinshelwood kinetics. gPROMS software is employed for modelling, parameter estimation and simulation of the hydrodesulfurization of crude oil. The model simulation results agree well with the experiments over the wide range of operating conditions studied. Following the parameter estimation, the model is used to predict the concentration profiles of hydrogen, hydrogen sulfide and sulfur along the catalyst bed length in the gas, liquid and solid phases, which provides further insight into the process.
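A minimal, hypothetical illustration of the two regression routes mentioned above for a power-law rate r = k0*exp(-Ea/(R*T))*C^n (not the paper's Langmuir–Hinshelwood model): LN fits the log-linearised form by ordinary least squares, NLN fits the raw model.

```python
# Linear (LN) vs non-linear (NLN) regression on synthetic rate data; illustrative only.
import numpy as np
from scipy.optimize import least_squares

R = 8.314
rng = np.random.default_rng(4)
T = np.repeat([608.0, 638.0, 673.0], 8)          # K (roughly 335-400 C)
C = np.tile(np.linspace(0.5, 3.0, 8), 3)         # sulfur concentration, arbitrary units
true_k0, true_Ea, true_n = 4.0e5, 9.0e4, 1.5
r = true_k0 * np.exp(-true_Ea / (R * T)) * C ** true_n * (1 + 0.03 * rng.standard_normal(T.size))

# LN: fit ln r = ln k0 - Ea/(R*T) + n ln C by ordinary least squares.
A = np.column_stack([np.ones_like(T), -1.0 / (R * T), np.log(C)])
coef, *_ = np.linalg.lstsq(A, np.log(r), rcond=None)
ln_k0, Ea_ln, n_ln = coef
print("LN :", np.exp(ln_k0), Ea_ln, n_ln)

# NLN: fit the untransformed rate expression by non-linear least squares.
def resid(p):
    k0, Ea, n = p
    return k0 * np.exp(-Ea / (R * T)) * C ** n - r

nln = least_squares(resid, x0=[np.exp(ln_k0), Ea_ln, n_ln])
print("NLN:", nln.x)
```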

9.
Gross error detection is crucial for data reconciliation and parameter estimation, as gross errors can severely bias both the estimates and the reconciled data. Robust estimators significantly reduce the effect of gross errors (or outliers) and yield less biased estimates. An important class of robust estimators is the maximum likelihood-type estimators, or M-estimators, two common types being Huber estimators and Hampel estimators. The former significantly reduce the effect of large outliers, whereas the latter nullify it. These estimators can be characterized through their influence function, which quantifies the effect of an observation on the estimated statistic; for an estimator to be robust, the influence function must be bounded and finite. For Hampel estimators the influence function becomes zero for large outliers, nullifying their effect, whereas Huber estimators do not reject large outliers and their influence function is simply bounded. We therefore consider the three-part redescending estimator of Hampel and compare its performance with a Huber-type estimator, the Fair function. A major advantage of redescending estimators is that outliers are easy to identify without any exploratory data analysis on the regression residuals: the outliers are simply the rejected observations. In this study, the redescending estimators are also tuned to the particular observed system data through an iterative procedure based on the Akaike information criterion (AIC). This tuning is not easily afforded by Huber estimators, and it can have a significant impact on the estimation. The resulting approach is incorporated within an efficient non-linear programming algorithm. All of these features are demonstrated on a number of process and literature examples for data reconciliation.
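A sketch of the two influence functions discussed above in their standard textbook forms (not copied from the paper): the Fair function bounds the influence of large residuals, while the Hampel three-part redescending function drives it to zero.

```python
# Standard Fair and Hampel influence (psi) functions; tuning constants are typical defaults.
import numpy as np

def psi_fair(u, c=1.3998):
    """Influence function of the Fair M-estimator (bounded, never exactly zero)."""
    return u / (1.0 + np.abs(u) / c)

def psi_hampel(u, a=1.7, b=3.4, c=8.5):
    """Three-part redescending Hampel influence function (zero beyond c)."""
    au = np.abs(u)
    s = np.sign(u)
    return np.where(au <= a, u,
           np.where(au <= b, a * s,
           np.where(au <= c, a * s * (c - au) / (c - b), 0.0)))

residuals = np.array([0.5, 2.0, 4.0, 12.0])    # standardized measurement residuals
print("Fair  :", psi_fair(residuals))           # large outlier keeps a bounded influence
print("Hampel:", psi_hampel(residuals))         # influence of the 12-sigma outlier is nullified
```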

10.
Various aspects of the use of extended Kalman filters for tracking the states of continuous emulsion polymerization reactors are investigated. The importance of introducing meaningful nonstationary stochastic states to account for unknown impurities, initiator efficiencies, modelling errors, etc., is illustrated. The robustness of these state estimators to unmodelled and unmeasured disturbances, to modelling errors, and to input errors is evaluated. A procedure for selecting an optimal set of on-line sensors is presented. The emulsion polymerization of styrene-butadiene rubber (SBR) is used as the example system.
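A minimal, generic EKF sketch of the idea of augmenting the state with a nonstationary (random-walk) disturbance; the scalar model below is hypothetical and stands in for the SBR reactor model of the paper.

```python
# Generic EKF with an augmented random-walk state d representing unknown impurities,
# so the filter tracks model-plant mismatch while estimating the physical state x.
import numpy as np

dt, k_r = 0.1, 0.5
Q = np.diag([1e-6, 1e-4])        # process noise; the larger entry lets d drift (random walk)
Rm = np.array([[1e-3]])          # measurement noise
H = np.array([[1.0, 0.0]])       # only x is measured

def f(z):
    x, d = z
    return np.array([x + dt * (k_r * (1.0 - x) - d), d])   # d follows a random walk

def F_jac(z):
    return np.array([[1.0 - dt * k_r, -dt],
                     [0.0,             1.0]])

def ekf_step(z_est, P, y_meas):
    # predict
    z_pred = f(z_est)
    F = F_jac(z_est)
    P_pred = F @ P @ F.T + Q
    # update
    S = H @ P_pred @ H.T + Rm
    K = P_pred @ H.T @ np.linalg.inv(S)
    z_new = z_pred + (K @ (np.atleast_1d(y_meas) - H @ z_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return z_new, P_new

z_est, P = np.array([0.0, 0.0]), np.eye(2) * 0.1
for y in [0.05, 0.09, 0.12, 0.16, 0.18]:        # hypothetical conversion measurements
    z_est, P = ekf_step(z_est, P, y)
print("estimated state x and impurity effect d:", z_est)
```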

11.
A mathematical model of the hydrophobic adsorption chromatographic separation of wood model constituents has been developed. Veratryl alcohol was selected to represent a lignin molecule and salicin to represent a lignin–carbohydrate complex. A variety of available experimental methods, in combination with parameter fitting, was used to estimate the packed-bed porosity, axial dispersion, film mass transfer and diffusivity parameters and the adsorption equilibria on a phenylic silica stationary phase. The model was verified to simulate the separation to within an accuracy of 95%. The model was, however, unable to predict the phenomenon of elution curve fronting caused by channeling of the packed bed.

12.
Continuous polymerization processes have advantages when large amounts of product are required; moreover, higher quality can be obtained because variability between batches is eliminated. Tubular reactors are economically attractive because of their simple geometry and high heat-exchange area; however, they are not commonly used for commercial purposes, mainly because of the large radial profiles. This study examines the operation of this kind of reactor in three different ways. First, a detailed two-dimensional mathematical model was developed, in which complete visualization of all axial and radial profiles is possible, allowing a safe analysis at different operating conditions. In a second step, a system composed of a continuously stirred tank reactor in series with a tubular reactor was used; a clear reduction in radial profiles is observed when prepolymerization is taken into account, improving both the homogeneity and the end properties of the polymer. In a third approach, neural networks (NNs) were used in parallel with a one-dimensional model. The objective was to illustrate how NNs can improve the prediction capability when it is not possible to build a reliable model because of uncertainties in parameters and incomplete knowledge of the system. The NNs generated good results, showing that the hybrid model was able to accurately simulate the reactor even when uncertainty in the kinetic and diffusional parameters was imposed on the model. © 2003 Wiley Periodicals, Inc. J Appl Polym Sci 91: 871–882, 2004
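A hedged sketch of the parallel hybrid idea (a simplified stand-in, not the authors' two-dimensional tubular-reactor model): a simple mechanistic expression predicts conversion, and a neural network trained on the residuals corrects for uncertain kinetic parameters. All data below are synthetic.

```python
# Parallel hybrid model: mechanistic prediction plus an NN residual correction.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

def mechanistic_conversion(tau, T, k0=1.0e4, Ea=4.0e4):
    """Simplified plug-flow-type prediction with an uncertain rate constant."""
    k = k0 * np.exp(-Ea / (8.314 * T))
    return 1.0 - np.exp(-k * tau)

# "Plant" data generated with slightly different (unknown) kinetics, standing in for experiments.
tau = rng.uniform(1.0, 20.0, 300)
T = rng.uniform(330.0, 380.0, 300)
x_plant = 1.0 - np.exp(-1.3e4 * np.exp(-4.1e4 / (8.314 * T)) * tau)

def features(tau, T):
    # simple scaling of the inputs for the network
    return np.column_stack([np.atleast_1d(tau) / 20.0, (np.atleast_1d(T) - 330.0) / 50.0])

# The NN learns the residual between plant data and the mechanistic model.
residual = x_plant - mechanistic_conversion(tau, T)
nn = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0)
nn.fit(features(tau, T), residual)

def hybrid_conversion(tau, T):
    return mechanistic_conversion(tau, T) + nn.predict(features(tau, T))[0]

print("mechanistic:", mechanistic_conversion(10.0, 360.0))
print("hybrid     :", hybrid_conversion(10.0, 360.0))
```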

13.
Among the available treatment methods, catalytic wet air oxidation (CWAO) is considered a useful and powerful method for removing phenol from wastewater. In this work, a mathematical model of a trickle bed reactor (TBR) undergoing CWAO of phenol is developed and the best kinetic parameters of the relevant reaction are estimated from experimental data (taken from the literature) using a parameter estimation technique. The validated model is then used for further simulation and optimization of the process. Finally, the TBR is scaled up to predict the behavior of CWAO of phenol in industrial reactors. The optimal operating conditions, based on maximum conversion and minimum cost, as well as the optimal distribution of the catalyst bed, are considered in the scale-up, and the optimal ratio of reactor length to reactor diameter is calculated taking into account the hydrodynamic factors (radial and axial concentration and temperature distributions).

14.
Inspired by evolutionary strategies and the biological DNA mechanism, a hybrid DNA-based genetic algorithm (HDNA-GA) with a population update operation and an adaptive parameter scope operation is proposed for solving parameter estimation problems of dynamic systems. The HDNA-GA adopts nucleotide-based coding and several molecular operations. Three new crossover operators (a replacement operator, a transposition operator and a reconstruction operator) are designed to improve population diversity, and a mutation operator with adaptive mutation probability is applied to guard against stalling at a local peak. In addition, a simulated annealing based selection operator is used to guide the evolution direction. To overcome the premature convergence drawbacks of GAs and enhance the algorithm's global and local search abilities, the population update operator and the adaptive parameter scope operator are introduced. Numerous comparative experiments on benchmark functions and real-world parameter estimation problems in dynamic systems are conducted, and the results demonstrate the effectiveness and efficiency of the HDNA-GA.

15.
Hybrid modeling approaches have recently been investigated as an attractive alternative for modeling fermentation processes. Normally, these approaches require estimated data to train the empirical part of a hybrid model, which may reduce the generalization ability of the derived hybrid model. Therefore, a simultaneous hybrid modeling approach is presented in this paper. It transforms the training of the empirical model part into a dynamic system parameter identification problem, and thus allows the empirical model part to be trained with only measured data. An adaptive escaping particle swarm optimization (AEPSO) algorithm with escaping and adaptive inertia weight adjustment strategies is constructed to solve the resulting parameter identification problem and thereby accomplish the training of the empirical model part. The uniform design method is used to determine the empirical model structure. The proposed simultaneous hybrid modeling approach has been applied to a lab-scale nosiheptide batch fermentation process. The results show that it is effective and leads to a more consistent model with better generalization ability than existing approaches. The performance of AEPSO is also demonstrated.
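A generic PSO sketch with a linearly decreasing (adaptive) inertia weight, standing in for the AEPSO of the paper (the escaping strategy itself is omitted); it identifies two parameters of a hypothetical kinetic model from simulated measurements.

```python
# Basic particle swarm optimization with an adaptive inertia weight; illustrative only.
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 8, 25)
y_obs = 3.0 * np.exp(-0.6 * t) + 0.05 * rng.standard_normal(t.size)

def cost(p):
    return np.sum((p[0] * np.exp(-p[1] * t) - y_obs) ** 2)

n_particles, n_iter, dim = 30, 200, 2
lb, ub = np.array([0.1, 0.01]), np.array([10.0, 3.0])
pos = rng.uniform(lb, ub, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_val = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for it in range(n_iter):
    w = 0.9 - 0.5 * it / n_iter            # inertia weight adapts from 0.9 down to 0.4
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lb, ub)
    vals = np.array([cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("PSO estimate of (amplitude, rate):", gbest)
```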

16.
A moving horizon estimation (MHE) approach to simultaneously estimate states and parameters is revisited. Two different noise models are considered, one with measurement noise only and one with additional state noise. The contribution of this article is twofold. First, we transfer the real-time iteration approach, developed in Diehl et al. (2002) for nonlinear model predictive control, to the MHE approach to make it real-time feasible. The scheme reduces the computational burden to one iteration per measurement sample and separates each iteration into a preparation and an estimation phase, which drastically reduces the time between measurements and computed estimates. Second, we derive a numerically efficient arrival cost update scheme based on a single QR factorization. The MHE algorithm is demonstrated on two chemical engineering problems, a thermally coupled distillation column and the Tennessee Eastman benchmark problem, and compared against an extended Kalman filter. The CPU times demonstrate the real-time applicability of the suggested approach.
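A conceptual MHE sketch under strong simplifying assumptions (a generic scalar system, a full solve of the horizon problem rather than the real-time iteration scheme of the paper): the state trajectory and an unknown parameter are estimated by minimizing an arrival cost plus measurement- and state-noise penalties.

```python
# Simplified moving horizon estimation over a fixed window; illustrative only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
a_true, N = 0.85, 10
x = 2.0
y_meas = []
for _ in range(N + 1):                       # simulate y_k = x_k + v_k, x_{k+1} = a*x_k + w_k
    y_meas.append(x + 0.05 * rng.standard_normal())
    x = a_true * x + 0.02 * rng.standard_normal()
y_meas = np.array(y_meas)

x_prior, P_prior = 1.8, 0.5                  # arrival cost data carried over from the previous window
Qw, Rv = 0.02 ** 2, 0.05 ** 2

def mhe_cost(z):
    xs, a = z[:N + 1], z[N + 1]
    J = (xs[0] - x_prior) ** 2 / P_prior                   # arrival cost
    J += np.sum((y_meas - xs) ** 2) / Rv                   # measurement-noise penalty
    J += np.sum((xs[1:] - a * xs[:-1]) ** 2) / Qw          # state-noise penalty
    return J

z0 = np.concatenate([y_meas, [0.5]])         # warm start: measurements plus a parameter guess
sol = minimize(mhe_cost, z0, method="BFGS")
print("estimated parameter a:", sol.x[-1])
print("estimated current state:", sol.x[N])
```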

17.
This work presents a comprehensive steady-state model of high-pressure ethylene polymerization in a tubular reactor that is able to calculate the complete molecular weight distribution (MWD). For this purpose, the probability generating function technique is employed. The model is embedded in an optimization framework, which is used to determine optimal reactor designs and operating conditions for producing a polymer with a tailored MWD. Two application examples are presented. The first involves maximizing conversion while obtaining a given MWD typical of industrial operation; excellent agreement between the resulting MWD and the target is achieved, with a conversion about 5% higher than those commonly reported for this type of reactor. The second example consists of finding the design and operating conditions necessary to produce a polymer with a bimodal MWD; the optimal design for this case involves splitting the initiator, monomer and modifier feeds between the main stream and two lateral injections. To the best of our knowledge, this is the first work dealing with the optimization of this process in which a tailored shape for the MWD is included. © 2007 Wiley Periodicals, Inc. J Appl Polym Sci, 2007

18.
In the current study, a parameter estimation method based on data screening by sensitivity analysis is presented. The method applies Multivariate Data Analysis (MVDA) to a large transient data set to select different subsets on which parameter estimation is performed. The subset is continuously updated as the parameter values evolve, using Principal Component Analysis (PCA) and a D-optimal onion design. The measurement data were taken from a Diesel Oxidation Catalyst (DOC) connected to a full-scale engine rig, and both kinetic and mass transport parameters were estimated. The methodology was compared with a conventional parameter estimation method; the proposed method achieved a 32% lower residual sum of squares and also displayed less tendency to converge to a local minimum. The computational time was, however, significantly longer for the evaluated method.
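A hypothetical illustration of the screening idea: PCA compresses a large transient data set and a small subset that spans the score space is selected for parameter estimation. The D-optimal onion design of the paper is replaced here by a simple quantile pick along the first principal component, and all data are synthetic.

```python
# PCA-based data screening followed by parameter estimation on the screened subset.
import numpy as np
from sklearn.decomposition import PCA
from scipy.optimize import least_squares

rng = np.random.default_rng(8)
t = np.linspace(0, 100, 2000)                           # large transient data set
T_in = 450 + 30 * np.sin(t / 7.0)                       # inlet temperature
conc = 0.8 * np.exp(-0.03 * t) + 0.02 * rng.standard_normal(t.size)
X = np.column_stack([t, T_in, conc])

# Screen: project onto principal components and pick points spread along PC1.
scores = PCA(n_components=2).fit_transform(X)
order = np.argsort(scores[:, 0])
subset = order[np.linspace(0, len(order) - 1, 40).astype(int)]   # 40 representative samples

# Estimate a rate constant using only the screened subset.
def resid(p):
    return 0.8 * np.exp(-p[0] * t[subset]) - conc[subset]

fit = least_squares(resid, x0=[0.01])
print("rate constant from screened subset:", fit.x[0])
```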

19.
A model for a spiral-wound reverse osmosis system using the three-parameter Spiegler–Kedem membrane transport model is presented. The pressure drops in the permeate and feed channels, as well as the variation of the mass transfer coefficient along the feed channel, were taken into account. An analytical solution was not possible because of the large number of nonlinear model equations; therefore, a computer solution using finite differences was employed. The data generated by simulation of the proposed model clearly indicate that neglecting the variation in the mass transfer coefficient and the pressure drop along the flow channels can lead to errors in permeate concentration, although the effect on permeate flow rate may not be significant. The significance of the reflection coefficient in the membrane transport model was also investigated. A method for estimating the model parameters is also presented, and previously reported experimental data were analyzed. Using this parameter-estimation program, a correlation for the mass transfer coefficient in the feed channel is proposed and compared with the correlation available in the literature.
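A standard-form sketch of the three-parameter Spiegler–Kedem relations referred to above (textbook equations with illustrative, assumed parameter values, not those fitted in the paper): flux Jv = Lp*(dP - sigma*dpi) and real rejection R = sigma*(1 - F)/(1 - sigma*F) with F = exp(-Jv*(1 - sigma)/Pm).

```python
# Spiegler-Kedem three-parameter membrane transport relations; illustrative values only.
import numpy as np

def spiegler_kedem(dP, c_wall, Lp, sigma, Pm, osm_coeff=8.0e4):
    """Return permeate flux [m/s] and real rejection for a given membrane wall concentration.

    osm_coeff approximates the osmotic pressure per unit NaCl concentration [Pa per kg/m3].
    """
    dpi = osm_coeff * c_wall                      # simple linear osmotic-pressure estimate [Pa]
    Jv = Lp * (dP - sigma * dpi)                  # volumetric permeate flux
    F = np.exp(-Jv * (1.0 - sigma) / Pm)
    R = sigma * (1.0 - F) / (1.0 - sigma * F)     # real (membrane) rejection
    return Jv, R

# Illustrative case: 30 bar applied pressure, 35 kg/m3 wall concentration.
Jv, R = spiegler_kedem(dP=30e5, c_wall=35.0, Lp=1.0e-11, sigma=0.98, Pm=2.0e-7)
c_permeate = 35.0 * (1.0 - R)
print(f"flux = {Jv:.2e} m/s, rejection = {R:.3f}, permeate conc = {c_permeate:.2f} kg/m3")
```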

20.
Multi-scenario optimization is a convenient way to formulate and solve multi-set parameter estimation problems that arise from errors-in-variables measured (EVM) formulations. These large-scale problems lead to nonlinear programs (NLPs) with a specialized structure that can be exploited by the NLP solver to obtain solutions more efficiently. Here we adapt the IPOPT barrier nonlinear programming algorithm to provide efficient parallel solution of multi-scenario problems. The recently developed object-oriented framework, IPOPT 3.2, has been specifically designed to allow specialized linear algebra that exploits problem-specific structure. This study discusses the high-level design principles of IPOPT 3.2 and develops a parallel Schur complement decomposition approach for large-scale multi-scenario optimization problems. A large-scale case study for the identification of an industrial low-density polyethylene (LDPE) reactor model is presented. The effectiveness of the approach is demonstrated through the solution of parameter estimation problems with over 4100 ordinary differential equations, 16,000 algebraic equations and 2100 degrees of freedom on a distributed cluster.
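A small dense-linear-algebra illustration of the Schur complement idea behind such a parallel multi-scenario solver (random hypothetical data, not IPOPT internals): each scenario block can be factorized independently, and only a small system in the shared parameters is solved centrally.

```python
# Schur complement solve of a block-bordered system; each scenario solve is independent.
import numpy as np

rng = np.random.default_rng(9)
n_scen, n_x, n_p = 4, 6, 2                      # scenarios, per-scenario vars, shared parameters

K, B, r = [], [], []
for _ in range(n_scen):
    M = rng.standard_normal((n_x, n_x))
    K.append(M @ M.T + n_x * np.eye(n_x))       # symmetric positive definite scenario block
    B.append(rng.standard_normal((n_x, n_p)))   # coupling to the shared parameters
    r.append(rng.standard_normal(n_x))
A_p = 10.0 * np.eye(n_p)
r_p = rng.standard_normal(n_p)

# Form the Schur complement in the shared parameters (each term is a scenario-local solve).
S = A_p.copy()
rhs = r_p.copy()
for Ki, Bi, ri in zip(K, B, r):
    KiBi = np.linalg.solve(Ki, Bi)              # scenario-local -> parallelizable in practice
    S -= Bi.T @ KiBi
    rhs -= Bi.T @ np.linalg.solve(Ki, ri)

p = np.linalg.solve(S, rhs)                     # small central system in the shared parameters
x = [np.linalg.solve(Ki, ri - Bi @ p) for Ki, Bi, ri in zip(K, B, r)]

# Verify against the monolithic block-bordered system.
full = np.zeros((n_scen * n_x + n_p, n_scen * n_x + n_p))
b = np.concatenate(r + [r_p])
for i in range(n_scen):
    sl = slice(i * n_x, (i + 1) * n_x)
    full[sl, sl] = K[i]
    full[sl, -n_p:] = B[i]
    full[-n_p:, sl] = B[i].T
full[-n_p:, -n_p:] = A_p
z = np.linalg.solve(full, b)
print("max difference vs monolithic solve:", np.max(np.abs(np.concatenate(x + [p]) - z)))
```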
