Similar Literature
20 similar documents found (search time: 73 ms)
1.
A data rectification method based on a new robust objective function is presented. The properties of the objective function and its influence function are analyzed, showing that the method is strongly robust against gross errors. Simulation studies are carried out on a linear and a nonlinear chemical process, with comparative analysis against the commonly used Huber and Fair robust estimators.

2.
Online rectification of chemical process measurement data based on neural networks   Cited: 6 (self-citations: 0, others: 6)
The application of artificial neural networks to the rectification of chemical process measurement data is studied, and a new sample-construction method and an online training strategy for the neural network are proposed. A neural network with the improved algorithm, designed and developed in-house, was run integrated with a data rectification system on measurement data from the cracked-gas separation system of an ethylene plant. The results show that neural-network-based data rectification can simultaneously correct the random errors and gross errors contained in measurement data, improving the accuracy of the process data and the stability of the rectification while meeting real-time requirements.

3.
Single-node identification of gross errors in measurement instruments   Cited: 6 (self-citations: 0, others: 6)
王希若  荣冈 《化工学报》2000,51(1):17-22
A new gross error identification method is proposed: combining instrument information such as reliability and accuracy class, gross errors are identified from the constraint residual at a single node. A simulation example is given, and the method is evaluated by comparison with several other commonly used error identification methods.
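The single-node constraint-residual test can be sketched numerically. The network, flows, and instrument standard deviations below are hypothetical, and the statistic shown is the generic standardized nodal test (balance residual divided by its standard deviation under A Σ Aᵀ), not the paper's exact reliability-weighted variant:

```python
import numpy as np
from scipy import stats

# Hypothetical 4-stream, 2-node mass-balance network: A x = 0 at steady state.
A = np.array([[1.0, -1.0, -1.0,  0.0],   # node 1: stream 1 splits into streams 2 and 3
              [0.0,  1.0,  0.0, -1.0]])  # node 2: stream 2 feeds stream 4
true_x = np.array([10.0, 6.0, 4.0, 6.0])
sigma = np.array([0.2, 0.15, 0.1, 0.15]) # instrument standard deviations
Sigma = np.diag(sigma**2)

x_meas = true_x + np.array([0.1, -0.05, 0.08, 0.12])  # random measurement noise
x_meas[1] += 1.5                                      # inject a gross error on stream 2

# Nodal test: the balance residual r = A x_meas has covariance A Sigma A^T, so each
# standardized nodal residual is ~ N(0, 1) when no gross error is present.
r = A @ x_meas
V = A @ Sigma @ A.T
z = np.abs(r) / np.sqrt(np.diag(V))

crit = stats.norm.ppf(1 - 0.025)   # two-sided test at alpha = 0.05 -> 1.96
flagged = np.where(z > crit)[0]
print("standardized nodal residuals:", np.round(z, 2))
print("suspect nodes:", flagged)   # both nodes incident to stream 2 fire
```

Because the single measurement error propagates into every node that stream touches, cross-referencing the flagged nodes against the incidence matrix points back to the faulty instrument.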

4.
Gross error identification in steady-state systems   Cited: 2 (self-citations: 1, others: 2)
Data rectification comprises two parts, data reconciliation and gross error detection and identification, of which the latter has long been the focus and the difficulty. For bias-type gross errors, the detection and identification of multiple gross errors in steady-state systems is studied. The concept of gross error identifiability is proposed, the characteristics of steady-state systems are analyzed, the conditions under which gross errors are identifiable are given, and a parameter-estimation method for identifying gross errors is proposed. Worked examples show that the method can accurately identify the multiple gross errors contained in a system, which is of considerable theoretical significance.

5.
Detection and rectification of gross errors in process industry measurement data   Cited: 8 (self-citations: 2, others: 6)
杨友麒  滕荣波 《化工学报》1996,47(2):248-253

6.
韩充 《山东化工》2013,(4):91-94
Robust data rectification methods can effectively perform data reconciliation and gross error detection. However, when leaks, accumulation, and similar problems make the model inaccurate, conventional robust data rectification cannot give satisfactory results. To address this, this paper improves the robust data rectification technique: equality constraints whose relative residuals exceed a critical value are converted into inequality constraints, and the relative residuals of the original equality constraints are added to the robust objective function, making the method applicable when the process model is inaccurate. Worked examples show that the proposed method gives good rectification results whether the model is accurate or not.

7.
何戡  刘绍鼎  王贵成  郭金玉 《辽宁化工》2006,35(4):222-224,227
The basic concepts of data rectification are briefly introduced; the research progress and achievements of data rectification methods are then surveyed systematically and comprehensively from three aspects: data reconciliation, gross error detection, and data rectification; finally, open problems worth further study in this field and possible directions for its development are discussed.

8.
An NT-MT combined algorithm for gross error detection and data reconciliation   Cited: 1 (self-citations: 0, others: 1)
An NT-MT combined method based on the nodal test (NT) and the measurement test (MT) is developed for gross error detection and data reconciliation in industrial applications. The combined method makes use of both tests, and the combination helps overcome the defects of each method taken alone. It also avoids artificial manipulation and eliminates the huge combinatorial problem that arises in the nodal-test-based combined method when a large process system contains more than one gross error. A serial compensation strategy is also used to avoid a decrease in the rank of the coefficient matrix during the computation. Simulation results show that the proposed method is very effective and performs well.

9.
Wavelet filtering can effectively reduce the random errors in chemical process measurement data, but it cannot identify whether gross errors are present in those data. To address this, by summarizing the relationship among rectified values, decomposition levels, and gross errors observed in a large number of wavelet-filtering rectification examples, this paper proposes a formula relating the three and uses it to detect and identify gross errors. Rectification results on measurement data generated by Aspen Dynamics simulations show that the proposed formula accurately captures the relationship among rectified values, decomposition levels, and gross errors, and that gross errors can be effectively detected and identified with it.

10.
Accurate and reliable measurement data are a prerequisite for plant process control, simulation, optimization, and production management, yet the process data acquired by instruments may contain gross errors that directly compromise the accuracy of data rectification, and existing rectification methods cannot fully avoid their influence. Based on the principle of double-weight M-estimation, a new strongly robust objective function is constructed here with the relative residual as its variable, so that a variable containing a gross error contributes only a constant to the function, thereby shielding the rectification process from the influence of gross errors. A representative linear problem and a representative nonlinear problem are studied as examples and compared with the existing Huber and Cauchy methods. The results show that, for both linear and nonlinear systems, the new method outperforms the Huber and Cauchy methods in gross error detection and is more stable, so it should be the first choice for data rectification.
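The abstract above does not reproduce its objective function, but the key property it describes (a gross-error variable contributing only a constant) is characteristic of redescending estimators. In the sketch below, Tukey's biweight rho function stands in as a hypothetical example of such a function, contrasted with the Huber and Cauchy rho functions mentioned in the abstract; the tuning constants are conventional defaults, not values from the paper:

```python
import numpy as np

def rho_huber(e, k=1.345):
    # Quadratic near zero, linear in the tails: outlier cost keeps growing.
    a = np.abs(e)
    return np.where(a <= k, 0.5 * e**2, k * a - 0.5 * k**2)

def rho_cauchy(e, c=2.385):
    # Logarithmic tail growth: outliers are down-weighted but never capped.
    return 0.5 * c**2 * np.log1p((e / c)**2)

def rho_biweight(e, c=4.685):
    # Tukey's biweight: the contribution saturates at c^2/6 for |e| > c,
    # so a gross error adds only a constant to the objective.
    a = np.minimum(np.abs(e) / c, 1.0)
    return (c**2 / 6.0) * (1.0 - (1.0 - a**2)**3)

e = np.array([0.5, 2.0, 10.0, 100.0])   # standardized residuals
print(rho_huber(e))      # grows linearly with the outlier size
print(rho_cauchy(e))     # grows logarithmically
print(rho_biweight(e))   # flat at c^2/6 once |e| > c
```

Minimizing a reconciliation objective built from such a saturating rho means a grossly erroneous measurement cannot drag the reconciled estimates, which is the stability property the abstract reports.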

11.
In order to derive higher-value operational knowledge from raw process measurements, advanced techniques and methodologies need to be exploited. In this paper a methodology for online steady-state detection in continuous processes is presented. It is based on a wavelet multiscale decomposition of the temporal signal of a measured process variable, which simultaneously allows for two important pre-processing tasks: filtering out the high-frequency noise via soft-thresholding and correcting abnormalities by analyzing the maxima of the wavelet transform modulus. The wavelet features involved in the pre-processing task are simultaneously exploited in analyzing the trend of the measured variable. The near-steady-state starting and ending points are identified by using the first and second orders of the wavelet transform, and a low-pass filter with a probability density function is employed to approximate the duration of a near-stationary condition. The method improves the quality of steady-state data sets, which directly improves the outcomes of data reconciliation and reduces manufacturing costs. A comparison with other steady-state detection methods on a case-study example indicates that the proposed methodology is efficient in detecting steady state and suitable for online implementation.
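As a minimal illustration of the soft-thresholding pre-processing step described above, the sketch below implements a multilevel Haar decomposition with soft-thresholding of the detail coefficients. The signal and threshold values are made up for the example; the paper's modulus-maxima analysis and steady-state endpoint detection are not reproduced:

```python
import numpy as np

def haar_soft_denoise(x, level_thresholds):
    """Haar-decompose x, soft-threshold the detail coefficients at each
    level with the given thresholds, then reconstruct the signal."""
    details = []
    approx = x.astype(float)
    for thr in level_thresholds:
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2.0)   # approximation
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2.0)   # detail
        d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)  # soft threshold
        details.append(d)
        approx = a
    for d in reversed(details):                            # inverse transform
        up = np.empty(2 * approx.size)
        up[0::2] = (approx + d) / np.sqrt(2.0)
        up[1::2] = (approx - d) / np.sqrt(2.0)
        approx = up
    return approx

rng = np.random.default_rng(1)
clean = np.full(64, 5.0)                         # steady-state signal
noisy = clean + rng.normal(0.0, 0.3, 64)         # high-frequency noise
denoised = haar_soft_denoise(noisy, level_thresholds=[0.5, 0.5, 0.5])
print(np.std(noisy - clean), np.std(denoised - clean))  # error shrinks
```

Because the transform is orthonormal, pure-noise detail coefficients keep the noise standard deviation, so a threshold somewhat above it suppresses most of them while leaving a genuine trend in the approximation coefficients.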

12.
Process measurements collected from daily industrial plant operations are essential for process monitoring, control, and optimization. However, those measurements are generally corrupted by errors, which include gross errors and random errors. Conventionally, the two types of errors have been addressed separately by gross error detection and data reconciliation. This work focuses on solving the simultaneous gross error detection and data reconciliation problem using the hierarchical Bayesian inference technique. The proposed approach solves the following problems in a unified framework. First, it detects which measurements contain gross errors. Second, the magnitudes of the gross errors are estimated. Third, the covariance matrix of the random errors is estimated. Finally, data reconciliation is performed using maximum a posteriori estimation. The proposed algorithm is applicable to both linear and nonlinear systems; for the nonlinear case, it does not involve any linearization or approximation steps. Numerical case studies are provided to demonstrate the effectiveness of the proposed method. © 2015 American Institute of Chemical Engineers AIChE J, 61: 3232–3248, 2015

13.
蒋余厂  刘爱伦 《化工学报》2011,62(6):1626-1632
Introduction: In actual industrial processes, the imbalance and incompleteness of process measurement data cause many difficulties for, and even failures of, process analysis and research work, so the process data must be rectified. Most current data rectification methods, however, address steady-state processes, whereas in practice the process conditions are more often changing, in which case steady-state data rectification methods can no longer meet the requirements.

14.
In a previous study, a nonlinear dynamic data reconciliation (NDDR) procedure based on the particle swarm optimization (PSO) method was developed and validated in line and in real time with actual industrial data obtained from an industrial polypropylene reactor (Prata et al., 2009, 2008b). Here the procedure is modified to allow for robust implementation of the NDDR problem with simultaneous detection of gross errors and estimation of model parameters. The negative effects of the less frequent gross errors are eliminated by implementing the Welsch robust estimator, avoiding the computation of biased estimates and the use of iterative procedures for detection and removal of gross errors. The performance of the proposed procedure was tested in line and in real time on an industrial bulk propylene polymerization process. A phenomenological model of the real process, based on detailed mass and energy balances and consisting of a set of differential-algebraic equations, was implemented and used for interpretation of the actual plant behavior. The resulting nonlinear dynamic optimization problem was solved iteratively on a moving time window, in order to capture the current process behavior and allow for dynamic adaptation of model parameters. Results indicate that the proposed procedure, based on the combination of the PSO method and the robust Welsch estimator, can be implemented in real time in real industrial environments, allowing for the simultaneous detection of gross errors and estimation of process states and model parameters, and leading to more robust and reproducible numerical performance.

15.
The focus of this short note is to highlight several techniques for solving industrial nonlinear data reconciliation problems. The main areas of discussion are starting-value generation, row and column scaling, regularization of the kernel matrix, the use of different and independent unconstrained solution methods such as ridge regression, matrix projection, Newton's method, and singular value decomposition, and infeasibility handling. These techniques are usually necessary to arrive at solutions to nonlinear reconciliation problems that are poorly initialized, ill-conditioned, or even inconsistent. A relatively large and well-studied numerical example, taken from the mining process industry, is solved to demonstrate some of the techniques discussed.

16.
Gross error detection is crucial for data reconciliation and parameter estimation, as gross errors can severely bias the estimates and the reconciled data. Robust estimators significantly reduce the effect of gross errors (or outliers) and yield less biased estimates. An important class of robust estimators is the maximum likelihood estimators, or M-estimators. These are commonly of two types, Huber estimators and Hampel estimators: the former significantly reduces the effect of large outliers, whereas the latter nullifies it. The two can be compared through their influence functions, which quantify the effect of an observation on the estimated statistic; for an estimator to be robust, the influence function must be bounded and finite. For the Hampel estimators the influence function becomes zero for large outliers, nullifying their effect; Huber estimators, by contrast, do not reject large outliers, and their influence function is merely bounded. We therefore consider the three-part redescending estimator of Hampel and compare its performance with a Huber-type estimator, the Fair function. A major advantage of redescending estimators is that outliers are easy to identify without any exploratory analysis of the regression residuals: the outliers are simply the rejected observations. In this study, the redescending estimators are also tuned to the particular observed system data through an iterative procedure based on the Akaike information criterion (AIC). This tuning is not easily afforded by the Huber estimators and can have a significant impact on the estimation. The resulting approach is incorporated within an efficient nonlinear programming algorithm. Finally, all of these features are demonstrated on a number of process and literature examples for data reconciliation.
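The bounded-versus-redescending distinction drawn above can be made concrete with the standard psi (influence) functions of the two estimator families. The tuning constants below are conventional textbook defaults, not values from this study, and the three-part Hampel form shown is the generic one:

```python
import numpy as np

def psi_huber(e, k=1.345):
    # Bounded influence: large outliers are capped at +/- k, never rejected.
    return np.clip(e, -k, k)

def psi_hampel(e, a=1.7, b=3.4, c=8.5):
    # Three-part redescending influence: linear up to a, constant up to b,
    # descending to zero at c, and exactly zero beyond c (outlier rejected).
    x = np.abs(e)
    s = np.sign(e)
    return np.where(x <= a, e,
           np.where(x <= b, a * s,
           np.where(x <= c, a * s * (c - x) / (c - b), 0.0)))

residuals = np.array([0.5, 2.0, 5.0, 20.0])
print(psi_huber(residuals))    # bounded: the 20-sigma outlier still weighs k
print(psi_hampel(residuals))   # redescending: influence of |e| > c is 0
```

The last column makes the paper's point directly: with the Hampel estimator an observation at 20 standard deviations contributes nothing, so it is identified as an outlier simply by being rejected.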

17.
Accuracy of an instrument has traditionally been defined as the sum of the precision and the bias. Recently, this notion was generalized to estimators [Bagajewicz, M. (2005a). On the definition of software accuracy in redundant measurement systems. AIChE Journal, 51(4), 1201–1206]. That definition was based on the maximum undetected bias and ignored the frequency of failure, thus providing an upper bound. In more recent work [Bagajewicz, M. (2005b). On a new definition of a stochastic-based accuracy concept of data reconciliation-based estimators. In European Symposium on Computer-Aided Process Engineering Proceedings (ESCAPE)], a more realistic concept of the expected value of accuracy was presented; however, only the timing and the condition of failure were sampled. In this paper we extend the Monte Carlo simulations to also sample the size of the gross errors, and we provide new insights on the evolution of biases through time.

18.
Process data measurements are important for process monitoring, control, optimization, and management decision making. However, process data may be heavily deteriorated by measurement biases and process leaks, so it is important to estimate biases and leaks simultaneously with data reconciliation. In this paper, a novel strategy based on support vector regression (SVR) is proposed to achieve simultaneous data reconciliation and joint bias and leak estimation in steady processes. Although the linear objective function of the proposed SVR approach is robust and computationally light, it does not yield the maximum likelihood estimate; therefore, to ensure accurate estimates, maximum likelihood estimation is applied to the result of the SVR approach. Simulation and comparison results on a linear recycle system and a nonlinear heat-exchange network demonstrate that the proposed strategy achieves data reconciliation and joint bias and leak estimation with superior performance.

19.
The application of nonlinear dynamic data reconciliation to plant data   Cited: 3 (self-citations: 0, others: 3)
We have extended a fairly comprehensive data reconciliation approach called nonlinear dynamic data reconciliation (NDDR) that was originally presented by Liebman et al. (1994, Comput. Chem. Engng, 16, 963–986). This approach is capable of reconciling data from both steady-state and dynamic processes as well as estimating parameters and unmeasured process variables. One recently added feature is the ability to detect measurement bias. Each of these features was developed and tested using computer simulation. In this paper we report the successful application of NDDR to reconcile actual plant data from an Exxon Chemicals process.

20.
For a steady-state process the accuracy of reconciled data may be measured by the trace of its covariance matrix of estimation errors. Quantitative relations are derived for the effects of adding and removing single measurements on estimation accuracy. It is proved that redundancy will never adversely affect estimation accuracy. It will always enhance estimation accuracy, if the measurements relate the process variables in a different way from the constraints. These relations are utilized to develop evolutionary strategies for selecting an optimal measurement structure.
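The trace comparison described above is easy to check numerically with the standard closed-form covariance of linearly reconciled data. The flow network and variances below are hypothetical:

```python
import numpy as np

# Linear steady-state reconciliation: minimize (x - y)^T Sigma^{-1} (x - y)
# subject to A x = 0.  Closed form for the estimation-error covariance:
#   P = Sigma - Sigma A^T (A Sigma A^T)^{-1} A Sigma.
def reconciled_cov(A, Sigma):
    S = A @ Sigma @ A.T
    return Sigma - Sigma @ A.T @ np.linalg.solve(S, A @ Sigma)

A3 = np.array([[1.0, -1.0, -1.0]])      # one node: stream 1 = stream 2 + stream 3
Sigma3 = np.diag([0.04, 0.02, 0.02])
P3 = reconciled_cov(A3, Sigma3)

# Add a redundant measurement x4 of stream 1 (a second meter), which relates
# the variables in a different way from the original balance constraint:
A4 = np.array([[1.0, -1.0, -1.0,  0.0],
               [1.0,  0.0,  0.0, -1.0]])
Sigma4 = np.diag([0.04, 0.02, 0.02, 0.04])
P4 = reconciled_cov(A4, Sigma4)

# Accuracy of the original three streams, before and after adding redundancy:
print(np.trace(P3), np.trace(P4[:3, :3]))  # the trace decreases
```

Consistent with the result stated in the abstract, the extra measurement lowers the trace of the covariance for the original three variables, i.e. redundancy enhances estimation accuracy here.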


Copyright©北京勤云科技发展有限公司  京ICP备09084417号