Similar documents (20 results)
1.
The application of nonlinear dynamic data reconciliation to plant data
We have extended a fairly comprehensive data reconciliation approach called nonlinear dynamic data reconciliation (NDDR) that was originally presented by Liebman et al. (1992, Comput. Chem. Engng, 16, 963–986). This approach is capable of reconciling data from both steady-state and dynamic processes as well as estimating parameters and unmeasured process variables. One recently added feature is the ability to detect measurement bias. Each of these features was developed and tested using computer simulation. In this paper we report the successful application of NDDR to reconcile actual plant data from an Exxon Chemicals process.

2.
An outlier in one variable will smear the estimates of other measurements in data reconciliation (DR). In this article, a novel robust method is proposed for nonlinear dynamic data reconciliation to reduce the influence of outliers on the DR results. The method introduces a penalty function matrix into a conventional least-squares objective function, assigning small weights to outliers and large weights to normal measurements. To avoid the loss of data information, an element-wise Mahalanobis distance is proposed, as an improvement on the vector-wise distance, to construct the penalty function matrix. The correlation of measurement errors is also considered. By constructing the penalty weight matrix, the method introduces robust statistical theory into the conventional least-squares estimator and achieves both good robustness and simple calculation. Simulation of a continuous stirred tank reactor verifies the effectiveness of the proposed algorithm.
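A minimal sketch of the kind of penalty-weighted least-squares reconciliation the abstract describes. The Huber-type thresholding of standardized residuals and the iteratively reweighted scheme are illustrative assumptions, not the authors' exact element-wise Mahalanobis construction.

```python
import numpy as np
from scipy.optimize import minimize

def robust_weights(y, y_hat, sigma, cutoff=1.345):
    """Element-wise penalty weights: close to 1 for normal residuals, small for outliers.

    Huber-style weighting on standardized residuals is used here as an illustrative
    stand-in for the element-wise Mahalanobis-distance weights in the abstract.
    """
    r = np.abs(y - y_hat) / sigma               # standardized residual per element
    return np.diag(np.where(r <= cutoff, 1.0, cutoff / np.maximum(r, 1e-12)))

def robust_reconcile(y, model, x0, sigma, n_iter=10):
    """Iteratively reweighted reconciliation: min (y - f(x))' W Q^-1 (y - f(x))."""
    Q_inv = np.diag(1.0 / sigma**2)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        W = robust_weights(y, model(x), sigma)  # update the penalty matrix
        x = minimize(lambda v: (y - model(v)) @ W @ Q_inv @ (y - model(v)), x).x
    return x                                    # reconciled estimates
```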

3.
In real industrial production, many mass and heat transfer processes operate under high temperature, high pressure, and even strongly acidic or alkaline conditions. In addition, some important variables cannot be measured, and chemical compositions are analyzed offline with a long time delay, which leads to inaccurate measurements of the process data. In this paper, a layered data reconciliation (LDR) method based on time registration is proposed to improve measurement accuracy and estimate unmeasured variables. Considering that material cannot be tagged and tracked in process manufacturing, a temporal and spatial matching strategy for the process data is designed based on a time-correlation analysis matrix constructed to describe the correlation of each time sequence in the data matrix. A layered data reconciliation model with time registration is then developed by reconciling the mass balance layer and the heat balance layer separately and stepwise, and the model is solved by the state transition algorithm. Meanwhile, regularization terms and engineering knowledge are introduced into the data reconciliation model to address the problem of insufficient redundancy. Industrial verification results from an actual industrial evaporation process indicate that the accuracy of the measured values is improved by the proposed reconciliation strategy.

4.
The detection of gross errors in the reconciliation of process measurement data is an important step in removing their distorting effects on the corrected data. Tests of maximum power (MP), based on the normal distribution, are known for the detection of gross errors in the measurements and in the constraints, but only for the constraints remaining after the elimination of unmeasured flows. Here, the MP tests are derived for the original constraints, which allows the direct detection of gross errors in species balances around individual process units. It is shown that the square of the MP test statistic is precisely equal to the reduction in the weighted sum of squares of the adjustments that results from the deletion of that constraint. The test is illustrated with two examples.
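As a rough illustration of a constraint-level gross error test, the sketch below computes standardized residuals of linear balance constraints (the classical nodal test); treating these as the MP statistics of the abstract is an assumption, and the flows, covariance, and critical value are made up for the example.

```python
import numpy as np

def constraint_mp_tests(A, y, Sigma):
    """Standardized test statistics for linear balance constraints A x = 0.

    The constraint residual r = A y has covariance A Sigma A'; each standardized
    residual z_j = r_j / sqrt(V_jj) is compared with a normal critical value to
    flag a gross error in the corresponding species/unit balance.
    """
    r = A @ y                      # balance residuals, one per constraint
    V = A @ Sigma @ A.T            # covariance of the residuals
    return r / np.sqrt(np.diag(V)) # standardized statistics

# Example: two units in series, three measured flows y1 -> y2 -> y3.
A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
y = np.array([100.3, 99.8, 104.9])          # third flow carries a gross error
Sigma = np.diag([1.0, 1.0, 1.0])
print(constraint_mp_tests(A, y, Sigma))     # |z| > ~1.96 flags a suspect balance
```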

5.
Data reconciliation and parameter estimation for process systems are the foundation of real-time operational optimization and process control. Under varying load, the nonlinearity of model-parameter changes and the presence of gross errors make the results of data reconciliation and parameter estimation inaccurate, which in turn degrades the efficiency of real-time operational optimization and process control. To address this problem, this paper proposes a data reconciliation and parameter estimation method for varying-load operation. The method comprises steady-state detection and data sampling of the process, data clustering over multiple operating conditions, and data reconciliation and parameter estimation based on multiple groups of measurements. First, valid and reliable process measurement data are selected; the data are then clustered according to the fluctuation of operating conditions under varying load and the nonlinear characteristics of the system; finally, the model parameters are adjusted on the basis of the clustering results so that the deviation between the model outputs and the process measurements is minimized. The method effectively reduces the influence of the nonlinearity of model-parameter changes and of gross errors on the reconciliation and estimation results. Using field measurement data, the method was applied to an air separation process; the results show that the data reconciliation and parameter estimation obtained with this method are more accurate.
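A minimal sketch, under assumed choices (k-means clustering and a least-squares fit), of the workflow the abstract outlines: group steady-state measurements by operating mode under varying load, then adjust the model parameters within each mode to minimize the model/measurement deviation. The names `model`, `theta0`, and the condition variables `X` are hypothetical placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import least_squares

def reconcile_by_operating_mode(X, y_meas, model, theta0, n_modes=3):
    """Cluster measurements by operating condition, then fit parameters per cluster.

    X       : array of load-related variables for each steady-state sample
    y_meas  : measured outputs for each sample
    model   : callable model(X_subset, theta) returning predicted outputs
    """
    labels = KMeans(n_clusters=n_modes, n_init=10).fit_predict(X)
    theta_by_mode = {}
    for k in range(n_modes):
        idx = labels == k
        # Adjust theta so the model output matches the measurements of this mode.
        res = least_squares(lambda th: model(X[idx], th) - y_meas[idx], theta0)
        theta_by_mode[k] = res.x
    return labels, theta_by_mode
```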

6.
This article describes a new framework for data reconciliation in generalized linear dynamic systems, for which the well-known Kalman filter (KF) is inadequate. In contrast to the classical formulation, the proposed framework takes a more concise form while retaining the same filtering accuracy. This follows from the properties of linear dynamic systems and the features of the linear equality-constrained least squares solution. Meanwhile, the statistical properties of the framework offer new potential for dynamic measurement bias detection and identification techniques. On the basis of this new framework, a filtering formula is rederived directly and the generalized likelihood ratio method is modified for generalized linear dynamic systems. Simulation studies of a material network demonstrate the effects of both techniques and highlight the characteristics of the identification approach. Moreover, the new framework provides insights into the connections between linear dynamic data reconciliation, linear steady-state data reconciliation, and the KF. © 2009 American Institute of Chemical Engineers AIChE J, 2010
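The abstract builds on the linear equality-constrained least squares solution. The generic sketch below solves such a problem through its KKT system and applies it to a toy one-state material balance; it only illustrates the building block and is not the authors' framework.

```python
import numpy as np

def ecls(H, z, W, A, b):
    """Linear equality-constrained least squares:
        min (z - H x)' W (z - H x)   s.t.  A x = b
    solved through the KKT system."""
    n, m = H.shape[1], A.shape[0]
    KKT = np.block([[H.T @ W @ H, A.T],
                    [A, np.zeros((m, m))]])
    rhs = np.concatenate([H.T @ W @ z, b])
    return np.linalg.solve(KKT, rhs)[:n]     # reconciled state estimate

# One reconciliation step for x_k, x_{k+1} with dynamics x_{k+1} = F x_k treated
# as a hard equality constraint, and both states directly measured.
F = np.array([[1.0]])
H = np.eye(2)                                # measurement matrix
W = np.eye(2)                                # inverse measurement covariance
A = np.hstack([F, -np.eye(1)])               # F x_k - x_{k+1} = 0
z = np.array([1.02, 0.97])                   # noisy measurements of x_k, x_{k+1}
print(ecls(H, z, W, A, np.zeros(1)))         # both estimates reconciled to 0.995
```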

7.
Most experimental polymerization kinetic data are in the form of degree of polymerization versus time plots. However, to explore kinetic models it is more useful to have the data as polymerization rate versus degree of polymerization plots. Converting degree of polymerization into rate is an ill-posed problem in that, if inappropriate methods are used, the noise in the data will be amplified, leading to unreliable results. This paper describes a procedure, based on Tikhonov regularization, to perform this conversion. The procedure is independent of kinetic models and keeps noise amplification under control. Its performance is demonstrated using several sets of published polymerization kinetic data. In each case the computed rates are used to determine the parameters in the rate expression proposed in the original papers. Modified rate expressions are also explored. The ease with which such investigations can be performed highlights the advantages of this new procedure. © 2004 Wiley Periodicals, Inc. J Appl Polym Sci 94: 1625–1633, 2004
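As a generic sketch of Tikhonov-regularized differentiation (not the authors' specific formulation), the code below estimates a rate from noisy, evenly spaced data by penalizing roughness of the derivative; `lam` sets the trade-off between fitting the data and suppressing noise amplification.

```python
import numpy as np

def tikhonov_derivative(t, y, lam):
    """Estimate dy/dt from noisy, evenly spaced data by Tikhonov regularization.

    The derivative u is chosen so that its running integral reproduces the data,
    with a roughness penalty lam * ||D u||^2 keeping noise amplification in check:
        min ||y - y[0] - A u||^2 + lam * ||D u||^2
    """
    n = len(t)
    h = t[1] - t[0]
    A = h * np.tril(np.ones((n, n)))            # cumulative integration operator
    D = np.diff(np.eye(n), n=2, axis=0)         # second-difference roughness operator
    return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ (y - y[0]))

t = np.linspace(0.0, 10.0, 200)
y = np.sin(t) + 0.01 * np.random.randn(200)     # noisy signal, true derivative cos(t)
dy = tikhonov_derivative(t, y, lam=1e-2)
```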

8.
Erroneous information from sensors affects process monitoring and control. An algorithm that combines multiple model identification methods can improve the sensitivity and accuracy of sensor fault detection and data reconciliation (SFD&DR). A novel SFD&DR algorithm is proposed that uses four types of models: an outlier-robust Kalman filter, locally weighted partial least squares, predictor-based subspace identification, and approximate linear dependency-based kernel recursive least squares. The residuals are further analyzed by artificial neural networks and a voting algorithm. The performance of the SFD&DR algorithm is illustrated with clinical data from artificial pancreas experiments with people with diabetes. Glucose-insulin metabolism has time-varying parameters and nonlinearities, providing a challenging system for fault detection and data reconciliation. Data from 17 clinical experiments collected over 896 h were analyzed; the results indicate that the proposed SFD&DR algorithm is capable of detecting and diagnosing sensor faults and reconciling the erroneous sensor signals with better model-estimated values. © 2018 American Institute of Chemical Engineers AIChE J, 65: 629–639, 2019

9.
This article describes a procedure for obtaining the partial derivatives of experimental data that depend on two independent variables. The starting equation is an ill-posed integral equation of the first kind. Tikhonov regularization is used to keep noise amplification under control. Implementation of the computation steps is described and the performance of the procedure is demonstrated by four practical examples. © 2010 American Institute of Chemical Engineers AIChE J, 2010

10.
Inference of physical parameters from reference data is a well-studied problem with many intricacies (inconsistent sets of data due to experimental systematic errors; approximate physical models…). The complexity increases further when the inferred parameters are used to make predictions (virtual measurements), because parameter uncertainty has to be estimated in addition to the parameters' best values. The literature is rich in statistical models for the calibration/prediction problem, each having benefits and limitations. We review and evaluate standard and state-of-the-art statistical models in a common Bayesian framework, and test them on synthetic and real datasets of temperature-dependent viscosity for the calibration of the Lennard-Jones parameters of a Chapman-Enskog model. © 2017 American Institute of Chemical Engineers AIChE J, 63: 4642–4665, 2017

11.
Accurate and reliable determination of the linear viscoelastic relaxation spectrum is a critical step in the application of any constitutive equation. The experimental data used to determine the relaxation spectrum always include noise and cover only a limited time or frequency range, both of which affect the determination of the spectrum. Regularization with quadratic programming has been used to derive the spectrum; however, because both the experimental data and the spectrum change by more than an order of magnitude, the input data and the spectrum are normalized so that the numerical procedure remains accurate. Accurate determination of the relaxation spectrum requires that the spectrum extend about two logarithmic decades on either side of the frequency range of the input data. The spectra calculated from G′ data alone and from G″ data alone are each more accurate over a different range of relaxation times; the best spectrum is obtained from a combination of G′ and G″ data, blended in the manner described herein. Comparison with existing methods in the literature shows a consistently improved performance of the present method, illustrated with both model and experimental data. © 1997 John Wiley & Sons, Inc. J Appl Polym Sci 64: 2177–2189, 1997
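For illustration only, the sketch below recovers a discrete relaxation spectrum from normalized G′/G″ data with non-negativity and a Tikhonov penalty, using NNLS as a stand-in for the quadratic programming used in the paper; the mode placement two decades beyond the data window follows the abstract's guideline, and all other choices are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def relaxation_spectrum(omega, G1, G2, lam, n_modes=40):
    """Discrete relaxation spectrum H(tau_j) from G'(omega) and G''(omega)."""
    tau = np.logspace(np.log10(1.0 / omega.max()) - 2,
                      np.log10(1.0 / omega.min()) + 2, n_modes)
    wt = np.outer(omega, tau)                   # omega_i * tau_j
    K1 = wt**2 / (1.0 + wt**2)                  # Maxwell kernel for G'
    K2 = wt / (1.0 + wt**2)                     # Maxwell kernel for G''
    # Normalize each row by the measured modulus (data span orders of magnitude),
    # stack both moduli, and append the regularization block.
    A = np.vstack([K1 / G1[:, None], K2 / G2[:, None],
                   np.sqrt(lam) * np.eye(n_modes)])
    b = np.concatenate([np.ones_like(G1), np.ones_like(G2), np.zeros(n_modes)])
    H, _ = nnls(A, b)
    return tau, H
```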

12.
A new data reconciliation method for hybrid systems
张奇然  荣冈 《化工学报》2005,56(6):1057-1062
For hybrid systems that contain both continuous production processes and discrete events, and especially for actual production processes with switches between production schemes, stochastic scheduling equations are introduced into the material balance model to construct a new reconciliation model containing the scheduling-equation parameter variable θ. This model is then solved with a reconciliation algorithm for uncertain models. Finally, simulation studies confirm the effectiveness and robustness of the proposed method.

13.
The main focus of this work was to elucidate whether color change correlates linearly with surface temperature. Colored samples were selected and grouped in the form of textile, ceramic, plastic, paint, and ink. The samples were first measured with an IR thermometer to record the exact surface temperature, followed by an immediate color measurement using a spectrophotometer. Color variations were recorded from about 20°C to 60°C, and the trend of the CIELAB color coordinates was plotted against surface temperature. The dependency between each CIE colorimetric coordinate and the object's surface temperature was statistically evaluated using Pearson's r, R values, and R-square analysis. A very strong correlation was observed for the ceramic, paint, and ink samples tested, while the textile and plastic samples also exhibited a strong trend. The results add new information about the potential correlation between colorimetric data and temperature, and implications for future research are discussed.

14.
To address the problem of minor fault detection in nonlinear dynamic processes, this paper proposes a fault detection method based on generalized non-negative matrix projection-maximum mean discrepancy (GNMP-MMD). First, GNMP is employed to obtain the residual scores of the samples. A sliding-window approach is then combined with MMD for real-time monitoring of sample status within the residual subspace. In this scheme, GNMP mitigates the impact of non-Gaussian data distributions, while MMD alleviates autocorrelation among samples. A numerical case and experimental data collected from the DAMADICS process are used to simulate and validate the proposed method. Compared with traditional principal component analysis (PCA), dynamic principal component analysis (DPCA), dynamic kernel principal component analysis (DKPCA), non-negative matrix factorization (NMF), GNMP, and MMD, the experimental results clearly illustrate the feasibility of the proposed method.
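A generic sketch of the monitoring statistic only (not the GNMP-MMD implementation): a biased RBF-kernel estimate of MMD² comparing a reference window of residual scores with the current sliding window. The kernel width, window sizes, and data are illustrative assumptions.

```python
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    """Biased estimate of squared maximum mean discrepancy with an RBF kernel.

    X is a reference window of residual scores from normal operation, Y is the
    current sliding window; a large MMD^2 suggests the current distribution has
    drifted (a possible minor fault).
    """
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)   # pairwise squared distances
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

# Sliding-window use: compare each new window with the normal-operation reference.
# A control limit would typically come from a permutation test on fault-free data.
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=(200, 3))        # residual scores, normal operation
win = rng.normal(0.3, 1.0, size=(50, 3))         # current window with a small shift
print(mmd2_rbf(ref, win, gamma=0.5))
```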

15.
16.
In this article we present an integrated approach for the treatment and analysis of FCC experimental data based on statistical techniques and the use of characteristic curves. The method involves a material balance reconciliation procedure that improves the reliability and precision of the experimental results. After data reconciliation, we propose the use of characteristic curves that properly describe the behavior of conversion and selectivity. The characteristic curves serve three main objectives: to detect suspicious points (outliers), to verify the correct trend of an experimental set of results, and to determine the discrimination capacity of the results when comparing two sets of data. These curves are based on fundamental knowledge of the process and a five-lump kinetic scheme; the coke characteristic curve takes into account a selective deactivation scheme for this product. The curve parameters are estimated by fitting to the experimental data using non-linear least-squares methods. The systematic use of this procedure is a powerful tool for reliable analysis of FCC evaluation results.
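As an illustration of the material-balance reconciliation step mentioned in the abstract (the authors' exact procedure and weighting are not given here), the sketch below applies the classical weighted least-squares adjustment subject to linear balance constraints; the five-lump yield numbers are made up.

```python
import numpy as np

def balance_reconcile(y, Sigma, A, b=None):
    """Classical weighted least-squares material-balance reconciliation.

    Adjust the measurements y (covariance Sigma) as little as possible, in the
    weighted sense, so that the linear constraints A x = b hold exactly:
        x_hat = y - Sigma A' (A Sigma A')^-1 (A y - b)
    """
    if b is None:
        b = np.zeros(A.shape[0])
    V = A @ Sigma @ A.T
    return y - Sigma @ A.T @ np.linalg.solve(V, A @ y - b)

# Example: FCC product yields (weight fractions of five lumps) must sum to 1.
y = np.array([0.505, 0.212, 0.148, 0.082, 0.046])   # measured, sums to 0.993
Sigma = np.diag([4e-4, 2e-4, 2e-4, 1e-4, 1e-4])
A = np.ones((1, 5))
x_hat = balance_reconcile(y, Sigma, A, b=np.array([1.0]))
print(x_hat, x_hat.sum())                            # adjusted yields sum to 1
```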

17.
A simulation study of heterogeneously catalyzed reactive distillation experiments carried out with the D + R tray, a novel type of laboratory equipment, is presented. One advantage of the D + R tray is that reaction and distillation alternate stage-wise in a well-defined way that can be modeled straightforwardly. An equilibrium-stage model is used to describe the distillation and a plug-flow reactor model to describe the catalyst-bed reactors. The model parameters are derived from a systematic experimental characterization of the D + R tray both as a reactor and as a distillation unit, and a validated physicochemical fluid property model is used. The primary experimental data are reconciled. Results from the predictive simulations are in good agreement with the experimental results. The influence of errors in the input parameters on the simulation results is investigated by means of a sensitivity and error analysis. © 2012 American Institute of Chemical Engineers AIChE J, 59: 1533–1543, 2013

18.
This paper proposes a novel correlation-metrics-based convolutional neural network (CNN) classification model for chemical process fault diagnosis, creating a heuristic representation of process variable locations in grey correlation space (GCS) in terms of copula entropy to guide the learning of classifiers. The correlation-metrics-based approach helps address the problem of insufficient information caused by a lack of labelled data. Specifically, variable correlations are fused into a heuristic matrix that provides prior knowledge for network learning and compensates for limited data information, before the CNN is employed to build the classifier that mines features in GCS. Driven by this mechanism, fault classification with small numbers of fault samples is successfully implemented. In simulation experiments carried out on the Tennessee Eastman (TE) process platform, we found that in GCS different fault samples exhibit markedly different features, while data resulting from the same fault rarely map to different ones. This observation lays a solid foundation for constructing superior fault classifiers. In addition, compared with conventional approaches, the proposed method demonstrates better fault classification performance when labelled fault samples are limited.

19.
The spectral power values representing Thornton's “alternative primary” PC, NP, and AP colour matching functions (CMF) are compared with the power values representing the 49‐observer Stiles–Burch average definition. The Thornton measurements are first converted by matrix transformation into a data set expressed in terms of spectral power at the Stiles–Burch primary wavelengths. Graphs and power ratios are used to compare the definitions for two alternative matches to the same visual stimulus. A triplet of n:n spectral‐power ratios (one in each dimension, R, G, and B) is used to quantify the differences between the alternative matches. The relationship between the Thornton PC and Stiles–Burch match‐definitions is then found to deviate from the expected power‐ratio of 1:1 after matrix transformation. The revealed relationship is an internally consistent and smooth function of matched wavelength, which has a different nonlinear characteristic in each R, G, and B dimension relative to the Stiles–Burch reference model. The “Thornton bow‐tie” phenomenon is also demonstrated between a pair of maximum saturation CMF definitions made with alternative primaries. The implicit differences in neutral axis definition represented by the bow‐tie diagram are linked to differences in trichromatic unit (T‐unit) definition. In this case, the conventional CMF normalization process is postulated to be inaccurate at the wavelengths concerned, resulting in incompatibility between the T‐unit definitions of the two primary sets being compared. The conventional N→3 T‐unit definition of visual neutrality equating Illuminant SE to a single R:G:B power ratio is extended, by adding an extra NN mapping to the definition. The resulting NN→3 mapping is in principle a fully determined redefinition of three‐dimensional T‐unit equivalence, in which many R:G:B ratios for a comprehensive set of visually neutral metamers can be mapped by NN transformation onto the conventional single ratio. The effect of NN mapping is to transform spectral power distributions (SPDs) into spectral effect distributions (SEDs) expressed in T‐units. The SPD/SED transform, thus defined, is proposed as a method for unifying CMF determinations made with alternative primaries. The expected outcome is that after transforming SPDs by NN mapping into SEDs the definitions for all visually matching metamers will be demonstrably interconvertible by matrix product. © 2004 Wiley Periodicals, Inc. Col Res Appl, 29, 438–450, 2004; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/col.20069

20.
Experimental data on reaction kinetics are usually in the form of concentration versus time. For kinetics investigation it is more convenient to have the data in the form of reaction rate versus concentration. Converting time-concentration data into concentration-reaction rate data is an ill-posed problem in the sense that, if inappropriate methods are used, the noise in the original data will be amplified, leading to unreliable results. This paper describes a conversion procedure, independent of the reaction rate model or mechanism, that keeps noise amplification under control. The performance of this procedure is demonstrated by applying it to several sets of published kinetic data. Since these data are accompanied by their rate equations, the computed rates are used to obtain the unknown parameters in these equations. Comparison of these parameters with published figures, and the ease with which they are obtained, highlights the advantages of the new procedure.
