Similar Articles (20 results)
1.
Soft sensors are widely used to estimate process variables that are difficult to measure online. However, their predictive accuracy gradually decreases as the state of the plant changes. We have been constructing soft sensor models based on the time difference of an objective variable, y, and that of explanatory variables (time difference models) to reduce the effects of deterioration with age, such as drift, without model reconstruction. In this paper, we attempt to improve and estimate the prediction accuracy of time difference models, and propose handling multiple y-values predicted from multiple time-difference intervals. A weighted average serves as the final predicted value, and the standard deviation of the interval-wise predictions as an index of prediction accuracy. Applied to real industrial data, this method predicted more data points with higher predictive accuracy and estimated the prediction errors more accurately than traditional methods.
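The core idea above — model differences of y and x over several time intervals, then combine the interval-wise predictions — can be sketched in plain NumPy. This is a minimal illustration on synthetic drifting data with equal weights standing in for the paper's weighted average; the linear process and the interval choices are assumptions, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic drifting process: y depends linearly on x plus a slow drift.
n = 200
x = rng.normal(size=n)
drift = np.linspace(0.0, 5.0, n)               # slow sensor drift
y = 2.0 * x + drift + rng.normal(scale=0.1, size=n)

intervals = [1, 2, 3]                           # hypothetical time-difference intervals
preds = []
for i in intervals:
    dx, dy = x[i:] - x[:-i], y[i:] - y[:-i]
    # Fit on differences; the slowly varying drift largely cancels out.
    coef = np.dot(dx, dy) / np.dot(dx, dx)
    # Predict the latest y from y(t - i) plus the predicted difference.
    preds.append(y[-1 - i] + coef * (x[-1] - x[-1 - i]))

preds = np.array(preds)
y_hat = preds.mean()                            # final prediction (equal weights here)
uncertainty = preds.std()                       # spread as a prediction-accuracy index
```

Because each model sees only differences, the additive drift term drops out of the regression, which is why no model reconstruction is needed as the plant ages.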

2.
To address the difficulty of building soft sensor models for industrial processes with system time delays, and the low accuracy of such models, a dynamic soft sensor modeling method (T-LSSVR) is proposed that combines the system delay (T) with least squares support vector regression (LSSVR). During modeling, the method identifies the "static response delay" and the "dynamic response delay" using the cross-correlation function and a first-order generalized difference algorithm, and then predicts the variables by soft sensing so that the secondary variables give the best estimate of the primary variable. Experiments on a chemical plant system with this dual-delay property show that the method predicts both dynamic and steady-state data well.

3.
Utilizing support vector machine in real-time crash risk evaluation
Real-time crash risk evaluation models will likely play a key role in Active Traffic Management (ATM). Models have been developed to predict crash occurrence in order to proactively improve traffic safety. Previous real-time crash risk evaluation studies mainly employed logistic regression and neural network models, which suffer from a restrictive linear functional form and from over-fitting, respectively. Moreover, these studies mostly focused on estimating the models but barely investigated their predictive abilities. In this study, support vector machine (SVM), a recently proposed statistical learning model, was introduced to evaluate real-time crash risk. The data were split into a training dataset (used for developing the models) and scoring datasets (for assessing the models' predictive power). A classification and regression tree (CART) model was developed to select the most important explanatory variables and, based on the results, three candidate Bayesian logistic regression models were estimated, accounting for different levels of unobserved heterogeneity. SVM models with different kernel functions were then developed and compared to the Bayesian logistic regression models. Model comparisons based on areas under the ROC curve (AUC) demonstrated that the SVM model with a radial-basis kernel function outperformed the others. Moreover, several extension analyses were conducted to evaluate the effect of sample size on the SVM models' predictive capability, the importance of variable selection before developing SVM models, and the effect of the explanatory variables in the SVM models. Results indicate that (1) a smaller sample size enhances the SVM model's classification accuracy, (2) a variable selection procedure is needed prior to SVM model estimation, and (3) explanatory variables have identical effects on crash occurrence in the SVM and logistic regression models.
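The pipeline this abstract describes — an RBF-kernel SVM scored against logistic regression by AUC on held-out data — can be sketched with scikit-learn. The crash data themselves are not public, so `make_classification` serves as an imbalanced stand-in:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for crash/non-crash observations (80/20 class balance).
X, y = make_classification(n_samples=600, n_features=8, n_informative=4,
                           weights=[0.8, 0.2], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=42, stratify=y)

# RBF-kernel SVM; decision_function gives the ranking scores AUC needs.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
svm.fit(X_tr, y_tr)
auc_svm = roc_auc_score(y_te, svm.decision_function(X_te))

# Plain logistic regression baseline on the same split.
logit = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
logit.fit(X_tr, y_tr)
auc_logit = roc_auc_score(y_te, logit.predict_proba(X_te)[:, 1])
```

Splitting into training and scoring sets before comparing AUCs mirrors the study's emphasis on predictive power rather than in-sample fit.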

4.
Mathematical models of voltage-gated ion channels are used in basic research, industrial and clinical settings. These models range in complexity, but typically contain numerous variables representing the proportion of channels in a given state, and parameters describing the voltage-dependent rates of transition between states. An open problem is selecting the appropriate degree of complexity and structure for an ion channel model given data availability. Here, we simplify a model of the cardiac human Ether-à-go-go related gene (hERG) potassium ion channel, which carries cardiac IKr, using the manifold boundary approximation method (MBAM). The MBAM approximates high-dimensional model-output manifolds by reduced models describing their boundaries, resulting in models with fewer parameters (and often variables). We produced a series of models of reducing complexity starting from an established five-state hERG model with 15 parameters. Models with up to three fewer states and eight fewer parameters were shown to retain much of the predictive capability of the full model and were validated using experimental hERG1a data collected in HEK293 cells at 37°C. The method provides a way to simplify complex models of ion channels that improves parameter identifiability and will aid in future model development.

5.
Real-time performance and accuracy are the two most challenging requirements in virtual surgery training. These difficulties limit the adoption of advanced models in virtual surgery, including many geometric and physical models. This paper proposes a physical model of virtual soft tissue: a twist model based on Kriging interpolation and membrane analogy. The proposed model can quickly locate spatial positions through the Kriging interpolation method and accurately compute force changes on the soft tissue through the membrane analogy method. A virtual surgery simulation system is built with a PHANTOM OMNI haptic interaction device to simulate torsion of a virtual stomach and arm, and to verify the real-time performance and simulation accuracy of the proposed model. The experimental results show that the proposed soft tissue model achieves high speed and accuracy, realistic deformation, and reliable haptic feedback.

6.
Principal component regression (PCR) has been widely used for soft sensor modeling and quality prediction over the last several decades, and remains popular in both academic research and industrial applications. However, most PCR models are determined by the projection method, which may lack a probabilistic interpretation of the process data. In fact, owing to inevitable process noise, most process data are inherently random variables. Several probabilistic PCA methods have been proposed in past years. Compared to deterministic modeling, a probabilistic model is more appropriate for characterizing the behavior of random variables in a process. This paper first presents a probabilistic derivation of the PCR model (PPCR) and then extends it to the mixture form (MPPCR). For quality prediction in processes with multiple operating modes, a mixture probabilistic soft sensor is developed based on the MPPCR model; the proposed soft sensor can also identify the current operating mode. To evaluate the performance of the MPPCR model, a numerical example and a benchmark simulation case study of the Tennessee Eastman process are provided. Different methods, including global, local, and multi-local PCR models, were compared with the proposed model, and the MPPCR model performed best among them.

7.
Estimation of mill power draw can play a critical role in the economics, operation, and control of an entire mineral processing plant, since milling is the single biggest expense within the process. Several empirical power-draw prediction models have therefore been generated from combinations of laboratory, pilot, and full-scale measurements under different milling conditions. However, they cannot be used in industrial plants, where in full-scale operation only a few of the input parameters those models require are measured. Moreover, empirical models do not assess the relationships between input features. This investigation introduces random forest (RF) as a predictive model, together with its associated variable importance measures as a sensible means of variable selection, to overcome the drawbacks of empirical models. Although RF is a powerful modeling tool that has been applied in many problem domains, it has not been considered comprehensively in powder technology. In this investigation, an industrial ball mill database from the Chadormalu iron ore processing plant was used to develop an RF model and explore relationships between power draw and other monitored operating parameters. Modeling results indicated that RF can greatly improve the prediction accuracy of power draw compared with regression as a typical method (R2: 0.98 vs. 0.60, respectively) and can rank operational milling parameters by importance.
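The RF-versus-regression comparison, including the variable-importance ranking, can be sketched as follows. The mill variables and the nonlinear power-draw response below are invented for illustration; the Chadormalu data are not public:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 800
# Hypothetical operating variables: e.g. feed rate, mill speed, pulp density, ball charge.
X = rng.uniform(0, 1, size=(n, 4))
# Nonlinear power-draw surrogate (not the plant's real relationship).
y = (10 * np.sin(np.pi * X[:, 0]) + 5 * X[:, 1] ** 2 + 2 * X[:, 2]
     + rng.normal(0, 0.3, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

rf = RandomForestRegressor(n_estimators=200, random_state=1).fit(X_tr, y_tr)
r2_rf = r2_score(y_te, rf.predict(X_te))
r2_lin = r2_score(y_te, LinearRegression().fit(X_tr, y_tr).predict(X_te))

importances = rf.feature_importances_   # variable-importance ranking, sums to 1
```

The nonlinear sine term is invisible to the linear baseline but captured by the forest, which is the gap the abstract's R2 comparison (0.98 vs. 0.60) reflects.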

8.
To overcome the low degrees of freedom and poor robustness of multivariable control systems under disturbance, a multi-objective predictive control algorithm based on trapezoidal-interval soft constraints is proposed, building on the triangular-interval soft-constraint control algorithm. First, a trapezoidal interval is placed outside the tolerance interval, dividing the control process into two parts so as to reduce the impact of transient deviations in the early stage of control. To mitigate the coupling among control objectives in industrial processes, two objective functions are constructed, and the ε-constraint method is improved with an iterative algorithm to optimize them, reducing the solution error of the objective functions and achieving coordinated control of multi-input multi-output systems. Simulation experiments on the Shell heavy oil fractionator model, comparing the proposed algorithm with the triangular-interval soft-constraint algorithm, show that it offers better robustness and faster response.

9.
This paper analyzes a number of strategies devoted to improving the generalization capabilities of neural-network-based soft sensors when only small data sets are available. The aim is to find a strategy able to cope with the scarcity of experimental data that often arises in industrial applications. The strategies considered are based on manipulating the experimental training data sets to increase their diversity, either by injecting noise into the available data or by using the bootstrap resampling approach. A new method is proposed, based on an aggregation of neural models trained on different training data sets obtained by noise injection and bootstrap resampling. The methods considered were compared in an industrial case study regarding the design of a backup soft sensor for a thermal cracking unit operating in a refinery in Sicily, Italy. The results of the case study show that all the methods considered improved the estimation capability of the models; the best performance was obtained with the method proposed by the authors.
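The combination of bootstrap resampling, noise injection, and model aggregation can be sketched with small scikit-learn MLPs. The tiny data set, the noise level, and the ensemble size below are illustrative assumptions, not the paper's refinery setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Deliberately small data set, as in the scarce-data scenario described above.
X = rng.uniform(-1, 1, size=(40, 2))
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(0, 0.05, 40)

models = []
for seed in range(10):
    r = np.random.default_rng(seed)
    idx = r.integers(0, len(X), len(X))                 # bootstrap resample
    X_b = X[idx] + r.normal(0, 0.02, X[idx].shape)      # noise injection
    m = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                     max_iter=2000, random_state=seed)
    models.append(m.fit(X_b, y[idx]))

# Aggregate: average the member predictions on fresh inputs.
X_test = rng.uniform(-1, 1, size=(100, 2))
y_test = X_test[:, 0] ** 2 + 0.5 * X_test[:, 1]
y_hat = np.mean([m.predict(X_test) for m in models], axis=0)
mse = float(np.mean((y_hat - y_test) ** 2))
```

Each member sees a slightly different, noisier version of the same scarce data, so averaging their outputs reduces the variance that a single network trained on 40 points would carry.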

10.
In process optimization, the setting of the process variables is usually determined by estimating a function that relates the quality to the process variables and then optimizing this estimated function. However, it is difficult to build an accurate function from process data in industrial settings because the process variables are correlated, outliers are included in the data, and the form of the functional relation between the quality and process variables may be unknown. A solution derived from an inaccurate function is normally far from being optimal. To overcome this problem, we use a data mining approach. First, a partial least squares model is used to reduce the dimensionality of the process and quality variables. Then the process settings that yield the best output are identified by sequentially partitioning the reduced process variable space using a rule induction method. The proposed method finds an optimal setting from historical data without constructing an explicit quality function. The proposed method is illustrated with two examples obtained from steel making processes. We also show, through simulation, that the proposed method gives more stable results than estimating an explicit function even when the form of the function is known in advance.

11.
Biphasic hyperelastic models have become popular for soft hydrated tissues, and there is a pressing need for appropriate identification methods using full-field measurement techniques such as digital volume correlation. This paper proposes to address this need with the virtual fields method (VFM). The main asset of the proposed approach is that it avoids the repeated resolution of complex nonlinear finite element models. By choosing special virtual fields, the VFM approach can extract hyperelastic parameters of the solid part of the biphasic medium without resorting to identifying the model parameters driving the osmotic effects in the interstitial fluid. The proposed approach is verified and validated through three examples: the first and second using simulated data, and the third using experimental data obtained from porcine descending thoracic aorta samples in an osmotically active solution.

12.
In the past several years there has been considerable commercial and academic interest in methods for variance-based sensitivity analysis. The industrial focus is motivated by the importance of attributing variance contributions to input factors: a more complete understanding of these relationships enables companies to achieve goals related to quality, safety, and asset utilization. In a number of applications, it is possible to distinguish between two types of input variables: regressive variables and model parameters. Regressive variables are those that can be influenced by process design or by a control strategy; with model parameters, there are typically no opportunities to directly influence their variability. In this paper, we propose a new method to perform sensitivity analysis through a partitioning of the input variables into these two groupings. A sequential analysis is proposed, where first a sensitivity analysis is performed with respect to the regressive variables; in the second step, the uncertainty effects arising from the model parameters are included. This strategy can be quite useful in understanding process variability and in developing strategies to reduce overall variability. When this method is used for nonlinear models that are linear in the parameters, analytical solutions can be utilized. In the more general case of models that are nonlinear in both the regressive variables and the parameters, either first-order approximations or numerically intensive methods must be used.

13.
Model validation is critical in predicting the performance of manufacturing processes. In predictive regression, proper selection of variables helps minimize the model mismatch error, proper selection of models helps reduce the model estimation error, and proper validation of models helps minimize the model prediction error. In this paper, the literature is briefly reviewed and a rigorous procedure is proposed for evaluating validation and data splitting methods in predictive regression modeling. Experimental data from a honing surface roughness study are used to illustrate the methodology. In particular, the individual versus average data splitting methods, as well as the fivefold versus threefold cross-validation methods, are compared. This paper shows that statistical tests and prediction error evaluation are important in subset selection and cross-validation of predictive regression models. No statistical differences were found between the fivefold and threefold cross-validation methods, or between the individual and average data splitting methods, in predictive regression modeling.
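The fivefold-versus-threefold comparison discussed above can be reproduced in outline with scikit-learn. Synthetic regression data stand in for the honing study's measurements:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_regression(n_samples=90, n_features=5, noise=5.0, random_state=0)
model = LinearRegression()

scores = {}
for k in (3, 5):
    cv = KFold(n_splits=k, shuffle=True, random_state=0)
    # Negative MSE is sklearn's convention; flip the sign to get a prediction error.
    mse = -cross_val_score(model, X, y, cv=cv, scoring="neg_mean_squared_error")
    scores[k] = mse.mean()
```

Comparing `scores[3]` and `scores[5]` (ideally with a paired statistical test across repeated splits, as the paper advocates) shows how little the fold count matters when the model and data are stable.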

14.
Principal component analysis (PCA) has been extensively used for the monitoring of industrial systems when the measurements are highly correlated or corrupted with noise. The basic assumptions for monitoring using PCA are that the considered processes are stationary and their operating regions are unchanged. In an actual plant, the operating conditions may vary with time because of dynamic behavior or the effects of disturbances. In this paper, a new predictive monitoring method is proposed that is composed of three parts: (1) the training data are divided into several parts, each representing the operating region at that stage; (2) PCA is applied to the first part of the divided raw data sets, and the other parts are projected by the same PCA model; (3) time series models are built to interpret the operating centers obtained in step 2, so that the operating region can be estimated for future monitoring. From these, a more reasonable monitoring region and future process deviations can be established, and false alarms are reduced under this monitoring scheme. Moreover, a measure of the difference between the principal component directions of the training data set and the monitored data set is used to check whether a process fault has occurred. The effectiveness of the proposed method is demonstrated with simulation results.

15.
Shimokawa, Toshio; Li, Li; Yan, Kun; Kitamura, Shinnichi; Goto, Masashi. 《Behaviormetrika》 2014, 41(2): 225-244

Ensemble learning, which combines multiple base learners to improve statistical prediction accuracy, is frequently used in statistical science and data mining. However, because of their "black box" nature, ensemble learning models are difficult to interpret. A recently proposed rule ensemble method known as RuleFit presents the base learners as production rules and also generates a measure of their influence on the response variable. The RuleFit method for binary responses applies a squared-error ramp loss function, and base learners are weighted by shrinkage regression using the lasso; thus, RuleFit is not built on a logistic regression model. Moreover, highly correlated pairs of base learners may be excessively pruned by the lasso. In this study, we solve the excess-pruning problem by constructing RuleFit within a logistic regression framework, weighting the base learners by the elastic net. The effectiveness of our proposed RuleFit model is illustrated on a real data set. In small-scale simulations, this method demonstrated higher predictive performance than the original RuleFit model.

16.
Recently there has been renewed interest in assessing the predictive accuracy of existing parametric models of creep properties, with the recently developed Wilshire methodology being largely responsible for this revival. Without exception, these studies have used multiple linear regression analysis (MLRA) to estimate the unknown parameters of the models, but this technique is not suited to data sets in which the predictor variables are all highly correlated (a situation termed multicollinearity). Unfortunately, because all existing long-term creep data sets incorporate accelerated tests, multicollinearity will be an issue (when temperature is held high, stress is always set low, yielding a negative correlation). This article quantifies the severity of this potential problem in terms of its effect on predictive accuracy and suggests a neat solution in the form of partial least squares analysis (PLSA). When applied to 1Cr–1Mo–0.25V steel, it was found that under MLRA nearly all the predictor variables in various parametric models appeared statistically insignificant despite accounting for over 90% of the variation in log times to failure. More importantly, the same linear relationship appeared to exist between the first PLS component and the log time to failure at both short and long times to failure, which enabled more accurate extrapolations of the time to failure than when the models were estimated using MLRA.

17.
Elimination of uninformative variables for multivariate calibration
A new method for the elimination of uninformative variables in multivariate data sets is proposed. To achieve this, artificial (noise) variables are added and a closed form of the PLS or PCR model is obtained for the data set containing the experimental and the artificial variables. The experimental variables that do not have more importance than the artificial variables, as judged from a criterion based on the b coefficients, are eliminated. The performance of the method is evaluated on simulated data. Practical aspects are discussed on experimentally obtained near-IR data sets. It is concluded that the elimination of uninformative variables can improve predictive ability.

18.
Treating the uncertain parameters of a structural system as interval variables, a new method is proposed, building on spectral analysis of random fatigue, for computing the fatigue damage of uncertain structures under stationary Gaussian loads. The method defines structural uncertainty with an interval parameter model and describes the randomness of external loads with power spectral densities. Rational series and unit symmetric intervals are used to express explicitly the interval frequency response functions of the structure and the interval of its dynamic response under stationary Gaussian loads. Based on the Tovo-Benasciutti fatigue damage prediction model, the expected interval rate of fatigue damage of the uncertain structure under random loads is computed; furthermore, by adjusting the unit symmetric interval of an uncertain parameter, the expected fatigue damage interval rate can be approximated for different uncertainty radii of that parameter. A numerical example comparing the proposed random-fatigue interval analysis method with the vertex method verifies its accuracy and applicability.

19.
20.
The correct modelling of constitutive laws is of critical importance for analyzing the mechanical behaviour of solids and structures. In soft tissue mechanics, for example, the nonlinear behaviour commonly displayed by the mechanical properties of such materials makes hyperelastic constitutive models commonplace. Hyperelastic models, however, depend on sets of parameters that must be obtained experimentally. In this study the authors use a combined computational/experimental scheme to study the nonlinear mechanical behaviour of biological soft tissues under uniaxial tension. The material constants for seven different hyperelastic material models are obtained via inverse methods. The use of Martins's model to fit experimental data is presented here for the first time. The search for an optimal value of each set of material parameters is performed by a Levenberg–Marquardt algorithm. As a control measure, the process is also applied in full to silicone-rubber samples subjected to uniaxial tension tests. The accuracy with which the experimental stress–strain relation fits the theoretical one, for both soft tissues and silicone rubber (both typically nonlinear), is evaluated. This study also aims to select which material models (or model types) the authors will employ in future work on the analysis of human soft biological tissues.
