Similar Documents
20 similar documents found
1.
Error-in-variables model (EVM) methods require information about variances of input and output measured variables when estimating the parameters in mathematical models for chemical processes. In EVM, using replicate experiments for estimating output measurement variances is complicated because true values of inputs may be different when multiple attempts are made to repeat an experiment. To address this issue, we categorize attempted replicate experiments as: (i) true replicates (TRs) when uncertain inputs are the same in replicated runs and (ii) pseudo replicates (PRs) when measured inputs are the same, but unknown true values of inputs are different. We propose methodologies to obtain output measurement variance estimates and associated parameter estimates for both situations. We also propose bootstrap methods for obtaining joint-confidence information for the resulting parameter estimates. A copolymerization case study is used to illustrate the proposed techniques. We show that different assumptions noticeably affect the uncertainties in the resulting reactivity-ratio estimates.
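The bootstrap idea described above can be illustrated with a generic residual bootstrap (not the paper's EVM-specific procedure): refit the model on resampled residuals and treat the cloud of refitted parameters as an empirical joint-confidence region. The straight-line model and every constant below are illustrative assumptions.

```python
import numpy as np

def bootstrap_joint_estimates(x, y, n_boot=500, seed=0):
    """Residual bootstrap for a straight-line model y = a + b*x.

    A generic stand-in for the paper's bootstrap: refit on resampled
    residuals and return the cloud of (a, b) estimates, whose empirical
    spread approximates a joint-confidence region for the parameters."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # point estimate
    resid = y - X @ beta
    boot = np.empty((n_boot, 2))
    for i in range(n_boot):
        # resample residuals with replacement and refit
        y_star = X @ beta + rng.choice(resid, size=resid.size, replace=True)
        boot[i], *_ = np.linalg.lstsq(X, y_star, rcond=None)
    return beta, boot
```

Percentile contours of `boot` then give approximate joint-confidence information for (a, b).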

2.
Prediction intervals in state-space models can be obtained by assuming Gaussian innovations and using the prediction equations of the Kalman filter, with the true parameters substituted by consistent estimates. This approach has two limitations. First, it does not incorporate the uncertainty caused by parameter estimation. Second, the assumption of Gaussianity of future innovations may be inaccurate. To overcome these drawbacks, Wall and Stoffer [Journal of Time Series Analysis (2002) Vol. 23, pp. 733–751] propose a bootstrap procedure for evaluating conditional forecast errors that requires the backward representation of the model. Obtaining this representation increases the complexity of the procedure and limits its implementation to models for which it exists. In this article, we propose a bootstrap procedure for constructing prediction intervals directly for the observations, which does not need the backward representation of the model. Consequently, its application is much simpler, without losing the good behaviour of bootstrap prediction intervals. We study its finite-sample properties and compare them with those of the standard and the Wall and Stoffer procedures for the local level model. Finally, we illustrate the results by implementing the new procedure to obtain prediction intervals for future values of a real time series.

3.
A common approach in fault diagnosis is monitoring the deviations of measured variables from the values at normal operations to identify the root causes of faults. When the number of conceivable faults is larger than that of predictive variables, conventional approaches can yield ambiguous diagnosis results including multiple fault candidates. To address the issue, this work proposes a fault magnitude based strategy. Signed digraph is first used to identify qualitative relationships between process variables and faults. Empirical models for predicting process variables under assumed faults are then constructed with support vector regression (SVR). Fault magnitude data are projected onto principal components subspace, and the mapping from scores to fault magnitudes is learned via SVR. This model can estimate fault magnitudes and discriminate a true fault among multiple candidates when different fault magnitudes yield distinguishable responses in the monitored variables. The efficacy of the proposed approach is illustrated on an actuator benchmark problem.
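As a minimal sketch of the projection step described above, the following uses PCA (via SVD) to map fault-response data to scores, with plain least squares standing in for the paper's support vector regression when learning the scores-to-magnitude map. The function names and the linear stand-in are assumptions, not the authors' implementation.

```python
import numpy as np

def fit_magnitude_map(X, mags, n_pc=2):
    """Project fault-response data onto principal components and learn a
    map from scores to fault magnitudes.

    X:    (n_samples, n_vars) deviations from normal operation.
    mags: (n_samples,) known fault magnitudes of the training runs.
    Plain least squares stands in for SVR here (an assumption, for brevity)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    P = Vt[:n_pc].T                       # PCA loadings
    T = (X - mu) @ P                      # scores
    A = np.column_stack([np.ones(len(T)), T])
    coef, *_ = np.linalg.lstsq(A, mags, rcond=None)

    def predict(x_new):
        t = (np.atleast_2d(x_new) - mu) @ P
        return np.column_stack([np.ones(len(t)), t]) @ coef

    return predict
```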

4.
In this paper, we show that, in finding a mathematical expression to predict the relationship between temperatures measured inside a multi-component distillation column and the quality of the product produced at the top of the column, the application of a recently developed systematic procedure for identifying Wiener nonlinear systems [20] supports the user in retrieving from the data accurate information about both the structure and initial parameter estimates of the model to be identified with iterative parameter optimization methods. This property enables the user to improve his prior knowledge instead of depending on it to obtain parameter estimates, as is the case in most existing parametric identification methods. A consequence of this dependency is that wrong prior information leads to models with poor prediction capability on the one hand, and, on the other hand, very little information on how to modify the model structure to get improved results. The latter often results in very time-consuming "trial-and-error" approaches that may furthermore yield poor results because of the possibility of getting stuck in local minima. The outlined approach has the potential to overcome these drawbacks. One common source of wrong prior knowledge in the identification of multi-component distillation columns is the presence of a static nonlinearity of exponential type that can be removed by taking the logarithm of the measured product quality. It is shown that this "trick" to linearize the system decreases the accuracy of the predicted product quality. The outlined approach is also compared to a simple NARX neural network black-box identification method that has the potential to approximate general nonlinear input-output behaviours.
This comparison shows that the neural network approach easily requires twice as many observations as the Wiener identification approach applied in this paper when the variance of the predicted product quality needs to be the same.

The real-life measurements used in this paper were collected at a refinery of the Dutch State Mines (DSM).

Finally, in order to use the model obtained with one (training) data set under other operational conditions, that is, to extrapolate the model, a simple observer design is discussed and validated with real-life measurements.

5.
Reaction rate equations with coefficients that have an Arrhenius dependence on temperature require nonlinear procedures to obtain parameter estimates. Estimates are important, but of equal importance are their measures of plausibility. The simplest measures, in the form (estimate ± limits), are based on linear approximations, which can be, and often are, highly misleading. But there is no need to use approximations because modern statistical profiling techniques can produce accurate intervals very efficiently. Profiling also provides valuable insight into the estimation situation by revealing how models can be simplified. Strategies are given for model reformulation and parameter transformation to produce models with well-behaved estimates.
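A standard parameter transformation of the kind this abstract recommends (the paper's specific reformulations may differ) is to center the Arrhenius law at a reference temperature, estimating k_ref = k(T_ref) instead of the pre-exponential factor, which typically decorrelates the parameters in nonlinear fitting:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_centered(T, k_ref, E, T_ref):
    """Rate constant via the centered Arrhenius form
        k(T) = k_ref * exp(-(E/R) * (1/T - 1/T_ref)),
    which replaces the pre-exponential factor A with k_ref = k(T_ref).
    Estimating (k_ref, E) instead of (A, E) usually gives far
    better-behaved, less correlated estimates."""
    return k_ref * math.exp(-(E / R) * (1.0 / T - 1.0 / T_ref))
```

Choosing T_ref near the middle of the experimental temperature range makes the (k_ref, E) pair nearly orthogonal in the least-squares sense.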

6.
A new method of selecting smoothing spline functions is presented and used to curve fit experimental fermentation data. From the curves generated, estimates for the specific growth rate, the specific rate of substrate utilization and the biomass energetic yield are made. Selection of the smoothness parameters is based on minimization of a response surface fit to the smoothness parameters used to generate the time profiles of biomass and substrate concentration. The response modelled is the extent of closure of the carbon balance and the available electron balance. Results obtained using cross validation for selection of the smoothness parameter are also presented, and compared to the results calculated using the response surface technique. Estimates made for specific growth rate and biomass energetic yield are used with the covariate adjustment procedure to calculate point and 95% confidence interval estimates for the true biomass energetic yield, η_max, and the maintenance coefficient, m_e. The results show that the spline fit has an effect on the parameter estimates. The newly developed response surface method compares favorably with the cross validation method.

7.
Application of Support Vector Regression to Soft Sensing of Ethylene Cracking Product Yields
Real-time prediction of ethylene cracking product yields is of great significance for cracking furnace operation. To address the problem of scarce effective sample data, a soft-sensor model of cracking product yields was built using support vector regression. A particle swarm optimization algorithm was used to tune the model parameters of the support vector machine, improving modelling efficiency and model accuracy. Modelling experiments based on plant data show that the support vector regression soft-sensor model of ethylene cracking product yields achieves high prediction accuracy and good trend-tracking performance.
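A minimal particle swarm optimizer of the kind used for the hyper-parameter search can be sketched as follows. In the paper's setting `obj` would be a validation error of the SVR model under candidate hyper-parameters; here it is any scalar objective. The constants (inertia `w`, acceleration factors `c1`, `c2`) are conventional defaults, not values from the paper.

```python
import random

def pso(obj, bounds, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer.  `obj` maps a parameter vector to a
    scalar loss; `bounds` is a list of (lo, hi) box constraints."""
    rnd = random.Random(seed)
    dim = len(bounds)
    pos = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_f = [obj(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # global best
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            f = obj(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```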

8.
This paper presents a general method for estimating model parameters from experimental data when the model relating the parameters and input variables to the output responses is a Monte Carlo simulation. From a statistical point of view a Bayesian approach is used in which the distribution of the parameters is handled in discretized form as elements of an array in computer storage. The stochastic nature of the Monte Carlo model allows only an estimate of the distribution to be calculated from which the true distribution must then be estimated. For this purpose an exponentiated polynomial function has been found to be useful. The method provides point estimates as well as joint probability regions. Marginal distributions and distributions of functions of the parameters can also be handled. The motivation for exploring this alternative parameter estimation technique comes from the recognition that for some systems, particularly when the underlying process is stochastic in nature, Monte Carlo simulation often is the most suitable way of modelling. As such, the Monte Carlo approach increases the range of problems which can be handled by mathematical modelling. The technique is applied to the modelling of binary copolymerization. Two models, the Mayo-Lewis and the Penultimate Group Effects models, are considered and a method for discriminating between these models in the light of sequence distribution data is proposed.
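The discretized-array treatment of the parameter distribution can be sketched for a one-parameter case. For brevity, an exact binomial likelihood replaces the paper's Monte Carlo likelihood estimate (which would add the smoothing step the abstract describes):

```python
import math

def grid_posterior(theta_grid, prior, k, n):
    """Discretized Bayesian update stored as a plain array, as in the
    abstract's approach: posterior(theta) ∝ prior(theta) * L(theta),
    here with the exact binomial likelihood for k successes in n trials."""
    binom = math.comb(n, k)
    post = [p * binom * t ** k * (1.0 - t) ** (n - k)
            for p, t in zip(prior, theta_grid)]
    z = sum(post)                      # normalize over the grid
    return [p / z for p in post]
```

Point estimates (posterior mean or mode) and probability regions then come directly from sums over the array.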

9.
Soft sensing is widely used in industrial applications. The traditional soft-sensing structure is open-loop, without a correction mechanism. If the operating conditions change or an unknown disturbance occurs, the forecast of the soft-sensing model may be incorrect. To obtain accurate values, online correction is necessary. In this paper, a semiclosed-loop framework (SLF) is proposed to establish a soft-sensing approach, which estimates the input variables at the next moment with a prediction model and calibrates the output variables with a compensation model. The experimental results show that the proposed method has better prediction accuracy and robustness than other open-loop models.
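The output-correction half of such a framework can be sketched as a filtered bias update (an illustrative reduction; the SLF described above additionally predicts the next input variables with a dedicated prediction model):

```python
def bias_compensator(alpha=0.5):
    """Simple output compensation: whenever a reference measurement of the
    output becomes available, update a filtered bias estimate and add it to
    subsequent model predictions.  `alpha` is an assumed filter gain."""
    state = {"bias": 0.0}

    def correct(y_model, y_ref=None):
        if y_ref is not None:
            # blend the new observed model error into the bias estimate
            state["bias"] = (1.0 - alpha) * state["bias"] + alpha * (y_ref - y_model)
        return y_model + state["bias"]

    return correct
```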

10.
The cure kinetics of a phenylethynyl-terminated poly(etherimide) are examined via differential scanning calorimetry (DSC) measurements. Isothermal holds at temperatures ranging from 325°C to 360°C provided the necessary information to develop reaction kinetics models using both first-order reaction kinetics and combination reaction kinetics. The first-order reaction kinetics model works well for quick estimates of degree of cure versus time over the experimental temperature range. However, significantly more accurate predictions of degree of cure versus time were provided by the combination reaction kinetics model. The lack of accuracy in the first-order model is due to the fact that the reaction cannot be described by a simple order. The experimental procedures followed to obtain the cure kinetics data and the construction of the models from this data are described; the predictions of these models are compared with the experimental results.
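The first-order kinetics mentioned above integrate in closed form for an isothermal hold: with k = A exp(-Ea/(RT)), dα/dt = k(1 - α) gives α(t) = 1 - exp(-kt). A sketch, with A and Ea as illustrative placeholders rather than the paper's fitted values:

```python
import math

def degree_of_cure(t, T, A, Ea, R=8.314):
    """Isothermal first-order cure kinetics: dα/dt = k(T)(1 - α) with
    Arrhenius k = A exp(-Ea/(R T)), integrating to α(t) = 1 - exp(-k t).
    A (1/s) and Ea (J/mol) are illustrative placeholders."""
    k = A * math.exp(-Ea / (R * T))
    return 1.0 - math.exp(-k * t)
```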

11.
Various short-cut procedures for estimating parameters in mathematical models given in the form of differential equations are compared with each other as well as with the maximum likelihood (ML) approach. Points covered are: computing time; programming effort; deviation of short-cut estimates from ML estimates; and convergence of an ML approach following up the short-cut technique.

12.
13.
In industry, it may be difficult in many applications to obtain a first-principles model of the process, in which case a linear empirical model constructed using process data may be used in the design of a feedback controller. However, linear empirical models may not capture the nonlinear dynamics over a wide region of state-space and may also perform poorly when significant plant variations and disturbances occur. In the present work, an error-triggered on-line model identification approach is introduced for closed-loop systems under model-based feedback control strategies. The linear models are re-identified on-line when significant prediction errors occur. A moving horizon error detector is used to quantify the model accuracy and to trigger the model re-identification on-line when necessary. The proposed approach is demonstrated through two chemical process examples using a model-based feedback control strategy termed Lyapunov-based economic model predictive control (LEMPC). The chemical process examples illustrate that the proposed error-triggered on-line model identification strategy can be used to obtain more accurate state predictions to improve process economics while maintaining closed-loop stability of the process under LEMPC. © 2016 American Institute of Chemical Engineers AIChE J, 63: 949–966, 2017
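A moving-horizon error detector of the kind described can be sketched as a rolling RMS test on prediction errors; the horizon length and threshold below are assumptions, not the paper's tuning.

```python
import numpy as np

def moving_horizon_trigger(y_meas, y_pred, horizon=10, threshold=0.5):
    """Flag model re-identification when the root-mean-square prediction
    error over the last `horizon` samples exceeds `threshold`."""
    errs = np.asarray(y_meas, dtype=float) - np.asarray(y_pred, dtype=float)
    if len(errs) < horizon:
        return False                    # not enough data to judge yet
    window = errs[-horizon:]
    return float(np.sqrt(np.mean(window ** 2))) > threshold
```

In a closed loop, a `True` result would launch a least-squares re-fit of the linear model on recent data before control resumes.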

14.
This article presents a model-based control approach for optimal operation of a seeded fed-batch evaporative crystallizer. Various direct optimization strategies, namely, single shooting, multiple shooting, and simultaneous strategies, are used to examine real-time implementation of the control approach on a semi-industrial crystallizer. The dynamic optimizer utilizes a nonlinear moment model for on-line computation of the optimal operating policy. An extended Luenberger-type observer is designed to enable closed-loop implementation of the dynamic optimizer. In addition, the observer estimates the unmeasured process variable, namely, the solute concentration, which is essential for the intended control application. The model-based control approach aims to maximize the batch productivity while satisfying the product quality requirements. Optimal control of crystal growth rate is the key to fulfilling this objective. This is due to the close relation of the crystal growth rate to product attributes and batch productivity. The experimental results suggest that real-time application of the control approach leads to a substantial increase, i.e., up to 30%, in the batch productivity. The reproducibility of batch runs with respect to the product crystal size distribution is achieved by thorough seeding. The simulation and experimental results indicate that the direct optimization strategies perform similarly in terms of optimal process operation. However, the single shooting strategy is computationally more expensive. © 2010 American Institute of Chemical Engineers AIChE J, 57: 1557–1569, 2011

15.
This article develops asymptotic theory for estimation of parameters in regression models for binomial response time series where serial dependence is present through a latent process. Use of generalized linear model estimating equations leads to asymptotically biased estimates of regression coefficients for binomial responses. An alternative is to use marginal likelihood, in which the variance of the latent process, but not the serial dependence, is accounted for. In practice, this is equivalent to using generalized linear mixed model estimation procedures, treating the observations as independent with a random effect on the intercept term in the regression model. We prove that this method leads to consistent and asymptotically normal estimates even if there is an autocorrelated latent process. Simulations suggest, however, that marginal likelihood estimation can still reproduce the biased generalized linear model estimates. This problem diminishes rapidly with an increasing number of binomial trials at each time point, but for binary data the chance of it occurring can remain over 45% even in very long time series. We provide a combination of theoretical and heuristic explanations for this phenomenon in terms of the properties of the regression component of the model, and these can be used to guide application of the method in practice.

16.
A sequential design strategy for selecting experimental runs to obtain model discrimination and precise parameter estimation is tested via a simulation study of propylene oxidation kinetics. The strategy is used to design all runs, including the preliminary ones, which were arbitrarily chosen by earlier researchers. To design initial runs, crude initial parameter guesses may be used in the rival models until least-squares estimates can be calculated. Even under conditions of very bad initial guesses and high error variances, this procedure selects whichever model is the correct one and estimates its parameters with precision, in fewer runs than previously reported.

17.
Two frequency-domain methods for estimating the parameters of linear time series models, one based on maximum likelihood (the 'Whittle criterion') and the other based on least squares (the 'Taniguchi criterion'), are discussed in this paper. A heuristic justification for their use in models such as bilinear models is given. The estimation theory and associated asymptotic theory of these methods are numerically illustrated for the bilinear model BL(p, 0, p, 1). For that purpose, an approach based on the calculus of Kronecker product matrices is used to obtain the derivatives of the spectral density function of the state-space form of the model.

18.
In a time-series regression setup, multinomial responses along with time-dependent observable covariates are usually modelled by suitable dynamic multinomial logistic probabilities. Frequently, the time-dependent covariates are treated as a realization of an exogenous random process, and one is interested in the estimation of both the regression and the dynamic dependence parameters conditional on this realization of the covariate process. There exists a partial likelihood estimation approach able to deal with the general dependence structures arising from the influence of both past covariates and past multinomial responses on the covariates at a given time by sequentially conditioning on the history of the joint process (response and covariates), but it provides standard errors for the estimators based on the observed information matrix, because this matrix happens to be the Fisher information matrix obtained by conditioning on the whole history of the joint process. This limitation of the partial likelihood approach holds even if the covariate history is not influenced by lagged response outcomes. In this article, a general formulation of the auto-covariance structure of a multinomial time series is presented and used to derive an explicit expression for the Fisher information matrix conditional on the covariate history, providing the possibility of computing the variance of the maximum likelihood estimators given a realization of the covariate process for the multinomial-logistic model. The difference between the standard errors of the parameter estimators under these two conditioning schemes (covariates vs. joint history) is illustrated through an intensive simulation study based on the premise of an exogenous covariate process.

19.
The determination of particle size distributions from, e.g., DMPS or TDMA measurements is an ill-posed problem that is often solved using regularization methods. With Tikhonov-type regularization, the standard smoothness assumptions may yield infeasible estimates for the size distribution in cases where the size distribution contains narrow peaks. In this paper we propose a method for the estimation of size distributions with narrow peaks. The method is based on the use of a weight function to modify the standard smoothness constraints and obtain a non-homogeneous smoothing effect. In the problem formulation, prior assumptions about the weight function are exploited, and the size distribution and the weight function can be estimated simultaneously. The performance of the method is evaluated with simulated TDMA data, and the results show that good agreement between the estimates and the true size distributions can be achieved.
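The weighted-smoothness idea can be sketched as a Tikhonov problem with a weight matrix applied to a second-difference operator, where small weights relax the smoothing near suspected peaks. This is a simple stand-in for the paper's jointly estimated weight function.

```python
import numpy as np

def weighted_tikhonov(A, b, W, lam):
    """Solve min ||A x - b||^2 + lam * ||W D x||^2 via the normal equations,
    where D is a second-difference smoothness operator on x and the diagonal
    weight matrix W sets the local smoothing strength (small entries relax
    smoothing near suspected narrow peaks)."""
    n = A.shape[1]
    D = np.diff(np.eye(n), n=2, axis=0)          # (n-2, n) second differences
    L = W @ D
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
```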

20.
Analysis of the growth of Trichosporon cutaneum on glucose was carried out using the concepts of material and energy regularities. Carbon and available electron balances were used to check the consistency of the measured data and to detect outlying data points. Combined point and interval estimates of the true biomass energetic yield and maintenance coefficient for T. cutaneum were obtained using a multivariate statistical procedure. The analysis shows that T. cutaneum has a higher true biomass energetic yield and a lower maintenance coefficient than Candida utilis, which has been found suitable for SCP production.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号