Similar Documents
 20 similar documents found (search time: 187 ms)
1.
Feasibility analysis is used to determine the feasible region of a multivariate process. This can be difficult when the process models include black-box constraints or the simulation is computationally expensive. To address such difficulties, surrogate models can be built as an inexpensive approximation to the original model and help identify the feasible region. An adaptive sampling method is used to efficiently sample new points toward feasible region boundaries and regions where prediction uncertainty is high. In this article, a cubic Radial Basis Function (RBF) is used as the surrogate model. An error indicator for the cubic RBF is proposed to quantify the prediction uncertainty and is used in adaptive sampling. In all case studies, the proposed RBF-based method shows better performance than a previously published Kriging-based method. © 2016 American Institute of Chemical Engineers AIChE J, 63: 532–550, 2017
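As a concrete illustration of the adaptive-sampling loop described above, the following Python sketch pairs a cubic RBF surrogate with a simple distance-to-nearest-sample uncertainty proxy. The toy constraint function and the scoring rule are assumptions for illustration only, not the paper's proposed error indicator.

```python
# Sketch of adaptive sampling with a cubic RBF surrogate for feasibility
# analysis. The feasibility function, bounds, and distance-based
# uncertainty proxy are illustrative assumptions.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.spatial.distance import cdist

def feasibility(x):                      # toy black-box constraint: psi(x) <= 0 is feasible
    return x[:, 0]**2 + x[:, 1]**2 - 1.0

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(10, 2))     # initial design
y = feasibility(X)

for _ in range(20):                      # adaptive sampling loop
    surrogate = RBFInterpolator(X, y, kernel='cubic')
    cand = rng.uniform(-2, 2, size=(500, 2))
    pred = surrogate(cand)
    dist = cdist(cand, X).min(axis=1)    # distance to nearest sample: uncertainty proxy
    # favour points near the predicted boundary (pred ~ 0) with high uncertainty
    score = dist / (1.0 + np.abs(pred))
    x_new = cand[np.argmax(score)][None, :]
    X = np.vstack([X, x_new])
    y = np.append(y, feasibility(x_new))
```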

2.
This article presents an integrated, simulation-based optimization procedure that can determine the optimal process conditions for injection molding without user intervention. The idea is to use a nonlinear statistical regression technique and design of computer experiments to establish an adaptive surrogate model with short turn-around time and adequate accuracy for substituting time-consuming computer simulations during system-level optimization. A special surrogate model based on the Gaussian process (GP) approach, which has not been employed previously for injection molding optimization, is introduced. GP is capable of giving both a prediction and an estimate of the confidence (variance) for the prediction simultaneously, thus providing direction as to where additional training samples could be added to improve the surrogate model. While the surrogate model is being established, a hybrid genetic algorithm is employed to evaluate the model to search for the global optimal solutions in a concurrent fashion. The examples presented in this article show that the proposed adaptive optimization procedure helps engineers determine the optimal process conditions more efficiently and effectively. POLYM. ENG. SCI., 47:684–694, 2007. © 2007 Society of Plastics Engineers.
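The core mechanism here, a GP that returns both a prediction and its variance, so new training samples can be placed where the model is least certain, can be sketched in a few lines. The quadratic stand-in "simulation" and the use of scipy's differential_evolution in place of the paper's hybrid genetic algorithm are assumptions for illustration.

```python
# Minimal sketch of GP-based adaptive optimization: fit a GP surrogate,
# search it for an optimum, and also sample where the GP variance is high.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from scipy.optimize import differential_evolution

def simulate(x):                         # stand-in for an injection-molding simulation
    return (x[:, 0] - 0.3)**2 + (x[:, 1] + 0.5)**2

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(8, 2))
y = simulate(X)

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True).fit(X, y)
    # search the surrogate for its optimum (stands in for the GA evaluation step)
    res = differential_evolution(lambda x: gp.predict(x[None, :])[0], [(-1, 1)] * 2, seed=1)
    # also query where the GP is least certain, to refine the surrogate
    cand = rng.uniform(-1, 1, size=(200, 2))
    _, std = gp.predict(cand, return_std=True)
    for x_new in (res.x, cand[np.argmax(std)]):
        X = np.vstack([X, x_new])
        y = np.append(y, simulate(x_new[None, :]))
```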

3.
A new approach of using computationally cheap surrogate models for efficient optimization of simulated moving bed (SMB) chromatography is presented. Two different types of surrogate models are developed to replace the detailed but expensive full-order SMB model for optimization purposes. The first type of surrogate is built through a coarse spatial discretization of the first-principles process model. The second one falls into the category of reduced-order modeling. The proper orthogonal decomposition (POD) method is employed to derive cost-efficient reduced-order models (ROMs) for the SMB process. The trust-region optimization framework is proposed to implement an efficient and reliable management of both types of surrogates. The framework restricts the amount of optimization performed with one surrogate and provides an adaptive model update mechanism during the course of optimization. The convergence to an optimum of the original optimization problem can be guaranteed with the help of this model management method. The potential of the new surrogate-based solution algorithm is evaluated by examining a separation problem characterized by nonlinear bi-Langmuir adsorption isotherms. By addressing the feed throughput maximization problem, the performance of each surrogate is compared to that of the standard full-order model based approach in terms of solution accuracy, CPU time and number of iterations. The quantitative results prove that the proposed scheme not only converges to the optimum obtained with the full-order system, but also provides significant computational advantages.
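The POD step used here to build the ROM is standard and compact: collect snapshots of the full-order state, take an SVD, and keep the modes that capture most of the snapshot energy. The sketch below uses synthetic snapshot data as a placeholder for full-order SMB profiles.

```python
# Sketch of building a POD (proper orthogonal decomposition) basis from
# snapshots of a full-order model. The snapshot data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
# snapshot matrix: each column is a spatial concentration profile at one time
snapshots = np.array([np.sin(np.linspace(0, np.pi, 200) * k) for k in (1, 2, 3)]).T
snapshots = snapshots + 0.01 * rng.standard_normal(snapshots.shape)

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # modes capturing 99.9% of snapshot energy
Phi = U[:, :r]                                # POD basis

# a full-order state z is approximated by its projection Phi @ (Phi.T @ z);
# the ROM then evolves only the r modal coefficients instead of 200 states
z = snapshots[:, 0]
z_rom = Phi @ (Phi.T @ z)
print(r, np.linalg.norm(z - z_rom) / np.linalg.norm(z))
```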

4.
Process simulations can become computationally too complex to be useful for model-based analysis and design purposes. Meta-modelling is an efficient technique to develop a surrogate model using "computer data", which are collected from a small number of simulation runs. This paper considers meta-modelling with time-space-dependent outputs in order to investigate the dynamic/distributed behaviour of the process. The conventional method of treating temporal/spatial coordinates as model inputs results in a dramatic increase in modelling data and is computationally inefficient. This paper applies principal component analysis to reduce the dimension of time-space-dependent output variables whilst retaining the essential information, prior to developing meta-models. Gaussian process regression (also termed kriging model) is adopted for meta-modelling, for its superior prediction accuracy when compared with more traditional neural networks. The proposed methodology is successfully validated on a computational fluid dynamic simulation of an aerosol dispersion process, which is potentially applicable to industrial and environmental safety assessment.
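The two-step workflow, PCA to compress the time-dependent output curves, then one Gaussian-process (kriging) model per retained principal component, can be sketched as follows. The input-output relationship and data sizes are synthetic placeholders.

```python
# Sketch of PCA-compressed meta-modelling of time-dependent outputs:
# 100-point output curves -> 3 PC scores -> one GP per score.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(30, 2))                      # inputs from 30 simulation runs
t = np.linspace(0, 1, 100)
Y = np.array([x[0] * np.exp(-3 * x[1] * t) for x in X])  # each run yields a 100-point curve

pca = PCA(n_components=3).fit(Y)                         # compress curves to 3 scores
scores = pca.transform(Y)
gps = [GaussianProcessRegressor(normalize_y=True).fit(X, scores[:, i]) for i in range(3)]

x_new = np.array([[0.5, 0.5]])
scores_new = np.column_stack([gp.predict(x_new) for gp in gps])
y_curve = pca.inverse_transform(scores_new)              # reconstructed full time profile
```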

5.
Multi-objective constrained optimization problems which arise in many engineering fields often involve computationally expensive black-box model simulators of industrial processes which have to be solved within a limited computational time budget, and hence a limited number of simulator calls. This paper proposes two heuristic approaches aiming to build proxy problem models, solvable by computationally efficient optimization methods, in order to quickly provide a sufficiently accurate approximation of the Pareto front. The first approach builds a multi-objective mixed-integer linear programming (MO-MILP) surrogate model of the optimization problem relying on piece-wise linear approximations of objectives and constraints obtained through brute-force sensitivity computation. The second approach builds a multi-objective nonlinear programming (MO-NLP) surrogate model using curve fitting of objectives and constraints. In both approaches the desired number of approximated solutions of the Pareto front are generated by applying the ε-constraint method to the multi-objective surrogate problems. The proposed approaches are tested for the cost vs. life cycle assessment (LCA)-based environmental optimization of drinking water production plants. The results obtained with both approaches show that a good-quality approximation of the Pareto front can be obtained with a significantly smaller computational time than with a state-of-the-art metaheuristic algorithm.
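The ε-constraint step used in both approaches is mechanical: minimize one objective while the other is bounded by a sweep of ε values, each solve yielding one Pareto point. The tiny bi-objective LP below is an illustrative stand-in for the paper's MO-MILP/MO-NLP surrogates.

```python
# Sketch of the epsilon-constraint method tracing a Pareto front.
import numpy as np
from scipy.optimize import linprog

# minimize f1 = x0 + 2*x1  subject to  f2 = 3*x0 + x1 <= eps,  x0 + x1 >= 1
pareto = []
for eps in np.linspace(1.5, 3.0, 8):
    res = linprog(c=[1, 2],
                  A_ub=[[3, 1], [-1, -1]],      # f2 <= eps ; -(x0 + x1) <= -1
                  b_ub=[eps, -1],
                  bounds=[(0, None), (0, None)])
    if res.success:                             # one Pareto point per eps value
        pareto.append((res.fun, 3 * res.x[0] + res.x[1]))
```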

6.
We propose the integration of simplified graphically inspired feasibility constraints into optimization-based models for distillation network synthesis. The approach facilitates the use of surrogate, potentially data-based, distillation column models while considering feasibility in a computationally efficient manner. The proposed approach can aid the formulation of efficient approaches for preliminary distillation network synthesis.

7.
This paper presents a new optimization approach for minimizing the warpage defect of injection-molded plastic parts. Existing methods in warpage optimization are either computationally expensive or, when inexpensive surrogate models are employed with a fixed set of sample points, the accuracy of the surrogate model can only be ensured by a large number of sample points, which in turn increases the amount of required computation. To address this problem, this paper applies a mode-pursuing sampling (MPS) method for warpage optimization, by integrating injection molding simulation with MPS, and by proposing a reinforced convergence criterion for the optimization process, in an attempt to search for the optimal process parameters of injection molding for minimizing warpage defect both effectively and efficiently. The MPS method can systematically generate more sample points in the neighborhood of the current optimal solution while statistically covering the entire search space. A case study of a scanner frame, where injection time, melt temperature and mold temperature are selected as the design variables, demonstrates that the proposed optimization method can effectively decrease the warpage deflection of an injection-molded part with significantly less computation required. Based on the optimization results, the paper also studies the influences of different process parameters on the severity of the warpage defect, providing a guideline for the setting of the proper process parameters.
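The MPS idea, draw new samples with probability biased toward low predicted objective values while still covering the whole space, can be illustrated compactly. The linear-interpolation surrogate, the weighting rule, and the toy "warpage" function below are simplifying assumptions, not the paper's exact MPS construction.

```python
# Simplified sketch of mode-pursuing sampling: candidates with lower
# predicted warpage get higher selection probability, but every candidate
# retains some probability, preserving global coverage.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def warpage(x):                          # toy stand-in for a molding simulation
    return np.sum((x - 0.4)**2, axis=-1)

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(12, 3))      # injection time, melt T, mold T (scaled)
y = warpage(X)

for _ in range(30):
    f = LinearNDInterpolator(X, y, fill_value=y.max())
    cand = rng.uniform(0, 1, size=(1000, 3))
    g = f(cand)
    w = (g.max() - g) + 1e-9             # higher weight where predicted warpage is low
    idx = rng.choice(len(cand), p=w / w.sum())
    X = np.vstack([X, cand[idx]])
    y = np.append(y, warpage(cand[idx]))
print(X[np.argmin(y)], y.min())
```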

8.
An Online Soft-Sensor Learning Method Based on the Recursive Kernel PLS Algorithm
Shao Weiming, Tian Xuemin, Wang Ping. CIESC Journal (化工学报), 2012, 63(9): 2887–2891
To address the dynamic, time-varying nature of industrial processes, an online soft-sensor learning method based on the kernel PLS algorithm is proposed. The method uses the kernel PLS algorithm to recursively learn from representative new samples, improving the model's adaptability with higher computational efficiency than the NIPALS algorithm. A similarity criterion that considers both input and output information is adopted to selectively delete one or more redundant samples, constructing the training sample set more effectively. A soft-sensor modeling study of the melt index of industrial polypropylene shows that the proposed method can quickly and effectively track melt-index changes during grade transitions.
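The online learning loop described above, append the new sample, drop the most redundant one by an input-output similarity score, refit, can be sketched as follows. sklearn's PLSRegression stands in for the paper's kernel PLS algorithm, and the additive similarity measure is an illustrative assumption.

```python
# Sketch of online soft-sensor maintenance with a bounded training set.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
w_true = np.array([1.0, 0.5, 0.0, 0.0, -0.3, 0.2])
X = rng.standard_normal((50, 6))                           # process variables
y = (X @ w_true + 0.05 * rng.standard_normal(50))[:, None]  # quality variable (e.g. melt index)
max_samples = 50

pls = PLSRegression(n_components=3).fit(X, y)
for _ in range(100):                                       # stream of new samples
    x_new = rng.standard_normal((1, 6))
    y_new = (x_new @ w_true + 0.05 * rng.standard_normal(1))[:, None]
    X, y = np.vstack([X, x_new]), np.vstack([y, y_new])
    if len(X) > max_samples:
        # redundancy score considers BOTH input and output similarity
        d_in = np.linalg.norm(X[:-1] - x_new, axis=1)
        d_out = np.abs(y[:-1] - y_new).ravel()
        idx = np.argmin(d_in + d_out)                      # most redundant old sample
        X, y = np.delete(X, idx, 0), np.delete(y, idx, 0)
    pls = PLSRegression(n_components=3).fit(X, y)          # recursive refit
```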

9.
One of the key technical challenges associated with modeling particulate processes is the ongoing need to develop efficient and accurate predictive models. Often the models that best represent solids handling processes, like discrete element method (DEM) models, are computationally expensive to evaluate. In this work, a reduced-order modeling (ROM) methodology is proposed that can represent distributed parameter information, like particle velocity profiles, obtained from high-fidelity (DEM) simulations in a more computationally efficient fashion. The proposed methodology uses principal component analysis (PCA) to reduce the dimensionality of the distributed parameter information, and response surface modeling to map the distributed parameter data to process operating parameters. This PCA-based ROM approach has been used to model velocity trajectories in a continuous convective mixer, to demonstrate its applicability for pharmaceutical process modeling. © 2014 American Institute of Chemical Engineers AIChE J, 60: 3184–3194, 2014
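The PCA-plus-response-surface workflow can be sketched in a few lines: compress the velocity profiles with PCA, then fit a quadratic response surface mapping operating parameters to the PC scores. All data below are synthetic placeholders for real DEM output.

```python
# Sketch of a PCA-based ROM: profiles -> PC scores -> response surface.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
params = rng.uniform(0, 1, size=(25, 2))              # e.g. impeller speed, flow rate (scaled)
z = np.linspace(0, 1, 80)
profiles = np.array([p[0] * z + p[1] * z**2 for p in params])  # velocity vs. position

pca = PCA(n_components=2).fit(profiles)               # 80-point profiles -> 2 scores
scores = pca.transform(profiles)
rsm = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(params, scores)

# predict the full velocity profile at unseen operating conditions
profile_new = pca.inverse_transform(rsm.predict(np.array([[0.7, 0.2]])))
```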

10.
In this work, an algorithm for the optimization of costly constrained systems is introduced. The proposed method combines advantages of global- and local-search algorithms with new concepts of feasibility space mapping, within a framework that aims to find global solutions with minimum sampling. A global search is initially performed, during which kriging surrogate models of the objective and the feasible region are developed. A novel search criterion for locating feasibility boundaries is introduced, which does not require any assumptions regarding the convexity and nonlinearity of the feasible space. Finally, local search is performed starting from multiple locations identified by clustering of previously obtained samples. The performance of the proposed approach is evaluated through both benchmark examples and a case study from the pharmaceutical industry. A comparison of the method with commercially available software reveals that the proposed method has a competitive performance in terms of sampling requirements and quality of solution. © 2014 American Institute of Chemical Engineers AIChE J, 60: 2462–2474, 2014
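A feasibility-boundary search criterion in this spirit can be sketched with a kriging (GP) model of the constraint: query where the predicted constraint value is near zero and the predictive uncertainty is high. The specific acquisition rule std − |mean| and the toy constraint are illustrative assumptions, not the paper's published criterion.

```python
# Sketch of kriging-based feasibility-boundary sampling.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def g(x):                                 # black-box constraint, g <= 0 feasible
    return np.sin(3 * x[:, 0]) + x[:, 1] - 0.5

rng = np.random.default_rng(7)
X = rng.uniform(0, 1, size=(10, 2))
y = g(X)
for _ in range(15):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 1, size=(400, 2))
    mu, std = gp.predict(cand, return_std=True)
    x_new = cand[np.argmax(std - np.abs(mu))]   # near boundary (mu ~ 0) + high uncertainty
    X = np.vstack([X, x_new])
    y = np.append(y, g(x_new[None, :]))
```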

11.
We generalize the applicability of interactive methods for solving computationally demanding, that is, time-consuming, multiobjective optimization problems. For this purpose we propose a new agent-assisted interactive algorithm. It employs a computationally inexpensive surrogate problem and four different agents that intelligently update the surrogate based on the preferences specified by a decision maker. In this way, we decrease the waiting times imposed on the decision maker during the interactive solution process and at the same time decrease the amount of preference information expected from the decision maker. The agent-assisted algorithm is not specific to any interactive method or surrogate problem. As an example we implement our algorithm for the interactive NIMBUS method and the PAINT method for constructing the surrogate. This implementation was applied to support a real decision maker in solving a two-stage separation problem.

12.
An algorithm is presented for identifying the projection of a scheduling model's feasible region onto the space of production targets. The projected feasible region is expressed using one of two mixed-integer programming formulations, which can be readily used to address integrated production planning and scheduling problems that were previously intractable. Production planning is solved in combination with a surrogate model representing the region of feasible production amounts to provide optimum production targets, while detailed scheduling is solved in a rolling-horizon manner to define feasible schedules for meeting these targets. The proposed framework provides solutions of higher quality and yields tighter bounds than previously proposed approaches. © 2009 American Institute of Chemical Engineers AIChE J, 2009

13.
We study the problem of intervention effects generating various types of outliers in a linear count time-series model. This model belongs to the class of observation-driven models and extends the class of Gaussian linear time-series models within the exponential family framework. Studies of the effects of covariates and interventions in count time-series models have largely lagged behind, because the underlying process, whose behaviour determines the dynamics of the observed process, is not observed. We suggest a computationally feasible approach to these problems, focusing especially on the detection and estimation of sudden shifts and outliers. We consider three different scenarios, namely the detection of an intervention effect of a known type at a known time, the detection of an intervention effect when the type and the time are both unknown, and the detection of multiple intervention effects. We develop score tests for the first scenario and a parametric bootstrap procedure based on the maximum of the different score test statistics for the second scenario. The third scenario is treated by a stepwise procedure, where we detect and correct intervention effects iteratively. The usefulness of the proposed methods is illustrated using simulated and real data examples.
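The "unknown time" scenario can be illustrated with a toy version of the max-score-plus-bootstrap recipe: a score-type statistic for a level shift at each candidate time, maximized over times, with a parametric bootstrap p-value. The i.i.d. Poisson null below is a deliberate simplification of the paper's observation-driven count model.

```python
# Toy sketch: max score statistic for a level shift at unknown time in a
# Poisson count series, with a parametric bootstrap p-value.
import numpy as np

def max_score(y):
    lam = y.mean()                               # null fit: constant Poisson mean
    n = len(y)
    stats = []
    for tau in range(5, n - 5):                  # candidate shift times
        m = n - tau
        s = y[tau:].sum() - m * lam              # score for a shift starting at tau
        stats.append(s * s / (m * lam))
    return max(stats)

rng = np.random.default_rng(8)
y = np.concatenate([rng.poisson(5, 80), rng.poisson(9, 20)])  # true shift at t = 80
obs = max_score(y)
boot = [max_score(rng.poisson(y.mean(), len(y))) for _ in range(500)]
p_value = np.mean([b >= obs for b in boot])      # bootstrap reference distribution
print(obs, p_value)
```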

14.
Batch-reactor input profiles are normally obtained under the assumption of knowledge of a parametric model. When this assumption does not hold, parameter deviation from the nominal value can severely impair performance of the nominal optimization. The minimax optimization offers an alternative that accounts for parametric uncertainty, but its inherent worst-case assumption degrades its performance near the nominal parameter value compared to that of the nominal optimization. This work presents a new optimization procedure that offers robustness similar to the minimax optimization while retaining nominal performance similar to the nominal optimization. Given a probability distribution for the uncertain process parameters from a previous identification step, the method optimizes the expectation of the cost function over the entire parameter space instead of optimizing the cost function at the expectation of the parameters. In this way increased robustness towards uncertain or time-varying parameters is obtained without unduly compromising the nominal performance. Since evaluation of the proposed objective function involves numerical integration, an efficient strategy is presented for obtaining its exact gradient indirectly, thereby rendering the method numerically reliable, computationally attractive and, in general, less demanding than the minimax approach. For the special case where the nominal parameter value is assigned probability one, the method reduces to the well-known control vector iteration procedure, a numerical optimization strategy based on Pontryagin's maximum principle. Two examples demonstrate the advantage of the new method over the conventional approaches, namely nominal optimization, the minimax approach and the local-sensitivity-based approach.
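The central distinction, optimizing E_θ[J(u, θ)] rather than J(u, E[θ]), is easy to demonstrate on a scalar toy problem. The quadratic cost and the Gaussian parameter distribution below are illustrative assumptions; quantile nodes stand in for the paper's numerical integration.

```python
# Sketch contrasting nominal, expected-cost and minimax designs for a
# scalar decision u under an uncertain parameter theta ~ N(1, 0.2^2).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def cost(u, theta):
    return (theta * u - 1.0)**2 + 0.1 * u**2

nodes = norm(1.0, 0.2).ppf(np.linspace(0.01, 0.99, 99))   # quadrature nodes over theta

opt = dict(bounds=(0.0, 2.0), method='bounded')
u_nom = minimize_scalar(lambda u: cost(u, 1.0), **opt).x               # J(u, E[theta])
u_exp = minimize_scalar(lambda u: np.mean(cost(u, nodes)), **opt).x    # E[J(u, theta)]
u_mm  = minimize_scalar(lambda u: np.max(cost(u, nodes)), **opt).x     # worst case
print(u_nom, u_exp, u_mm)
```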

15.
Subspace identification methods for bilinear systems suffer from explosive growth of the data matrices; the resulting computational burden has been the main obstacle to real applications of bilinear subspace identification. In this paper, we propose a novel approach that identifies a bilinear predictor model from input-output data with enhanced computational efficiency. Based on displacement structure theory, the QR factorization is replaced with a fast Cholesky factorization, which copes with the huge dimensionality and therefore reduces the computational cost. These improvements make the bilinear subspace approach more computationally efficient with good prediction ability. Finally, the proposed control approach is illustrated with a simulation of a non-linear continuously stirred tank reactor (CSTR) system.
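A numerical illustration of the underlying identity: the triangular factor R from the QR of a tall data matrix H equals, up to the sign of each row, the Cholesky factor of the much smaller Gram matrix H'H, so the O(n·m²) QR on the full data can be replaced by an O(m³) Cholesky. (The paper additionally exploits displacement structure for a fast Cholesky; the sketch below shows only the QR-to-Cholesky replacement, with toy matrix sizes.)

```python
# R from QR of a tall matrix vs. Cholesky of its Gram matrix.
import numpy as np
from scipy.linalg import cholesky, qr

rng = np.random.default_rng(9)
H = rng.standard_normal((10000, 12))       # tall data matrix, as in subspace ID

R_qr = qr(H, mode='economic')[1]           # factorization of the full data
R_ch = cholesky(H.T @ H, lower=False)      # factorization of the 12 x 12 Gram matrix

# identical up to the sign of each row
print(np.allclose(np.abs(R_qr), np.abs(R_ch)))
```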

16.
Fast Filtering and Smoothing for Multivariate State Space Models
This paper investigates a new approach to diffuse filtering and smoothing for multivariate state space models. The standard approach treats the observations as vectors, while our approach treats each element of the observational vector individually. This strategy leads to computationally efficient methods for multivariate filtering and smoothing. Also, the treatment of the diffuse initial state vector in multivariate models is much simpler than in existing methods. The paper presents details of relevant algorithms for filtering, prediction and smoothing. Proofs are provided. Three examples of multivariate models in statistics and economics are presented for which the new approach is particularly relevant.
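The element-by-element idea can be sketched with a sequential Kalman update: when the measurement-noise covariance is diagonal, each component of y_t is processed as a scalar observation, so no matrix inversion is needed. The model matrices below are toy values, and the sketch omits the paper's diffuse initialization.

```python
# Sketch of univariate (sequential) Kalman filtering of a 2-D observation.
import numpy as np

T = np.array([[0.9, 0.1], [0.0, 0.8]])     # state transition
Z = np.array([[1.0, 0.0], [1.0, 1.0]])     # observation matrix (2 observation elements)
sig2 = np.array([0.5, 0.3])                # diagonal measurement variances
Q = 0.1 * np.eye(2)                        # state noise covariance

def filter_step(a, P, y):
    for i in range(len(y)):                # scalar update per observation element
        z = Z[i]
        f = z @ P @ z + sig2[i]            # scalar innovation variance (no inversion)
        k = P @ z / f                      # gain vector
        a = a + k * (y[i] - z @ a)
        P = P - np.outer(k, z @ P)
    return T @ a, T @ P @ T.T + Q          # time update

a, P = np.zeros(2), np.eye(2)
for y in np.random.default_rng(10).standard_normal((50, 2)):
    a, P = filter_step(a, P, y)
```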

17.
Every day, millions of barrels of oil are moved around the world in imports and exports, and domestically within countries. Ships are the main mode of intercontinental transport, pipelines are the chief form of transcontinental transport, and regional and local transport is performed by trains and trucks. Despite high installation costs, pipelines are considered highly efficient as a mode for transporting large amounts of oil and oil products over long distances, because they offer lower operating costs, higher reliability, lower product-loss rates, less environmental impact, and less susceptibility to adverse weather conditions than other modes. This study deals with a multi-product pipeline system that transports a set of oil products (diesel, gasoline and kerosene, for example), which have to be moved from points (operating areas) where they are produced or stored (refineries, terminals) to points where they are needed (other refineries, distribution centers, terminals, ports, customers) through a pipeline or set of pipelines. The present study contributes primarily by offering an efficient tool for the problem of scheduling multi-product pipeline networks. The proposed methodology discretizes both the pipelines and the planning horizon and combines an efficient MILP model with a post-processing heuristic. Compared with previous models, we propose a more efficient one in which the set of volumetric constraints is modeled in the form of knapsack cascading constraints and constraints on products in pipeline sections, which significantly improved performance in the experiments conducted. The proposed methodology thus constitutes an advance in modeling the problem, making it feasible to solve problems increasingly close to the realities confronting oil-industry operators.

18.
Model-based experiment design techniques are an effective tool for the rapid development and assessment of dynamic deterministic models, yielding the most informative process data to be used for the estimation of the process model parameters. A particular advantage of the model-based approach is that it permits the definition of a set of constraints on the experiment design variables and on the predicted responses. However, uncertainty in the model parameters can lead the constrained design procedure to predict experiments that turn out to be, in practice, suboptimal, thus decreasing the effectiveness of the experiment design session. Additionally, in the presence of parametric mismatch, the feasibility constraints may well turn out to be violated when that optimally designed experiment is performed, leading in the best case to less informative data sets or, in the worst case, to an infeasible or unsafe experiment. In this article, a general methodology is proposed to formulate and solve the experiment design problem by explicitly taking into account the presence of parametric uncertainty, so as to ensure both feasibility and optimality of the planned experiment. A prediction of the system responses for the given parameter distribution is used to evaluate and update suitable backoffs from the nominal constraints, which are used in the design session to keep the system within a feasible region with specified probability. This approach is particularly useful when designing optimal experiments starting from limited preliminary knowledge of the parameter set, with great improvement in terms of design efficiency and flexibility of the overall iterative model development scheme. The effectiveness of the proposed methodology is demonstrated and discussed by simulation through two illustrative case studies concerning the parameter identification of physiological models related to diabetes and cancer care. © 2009 American Institute of Chemical Engineers AIChE J, 2010
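The backoff mechanism can be sketched directly: propagate the parameter distribution through the response model, then tighten the nominal constraint so the design remains feasible with a chosen probability. The first-order response model, Monte Carlo propagation, and 95% level below are illustrative assumptions.

```python
# Sketch of a probabilistic constraint backoff under parametric uncertainty.
import numpy as np

rng = np.random.default_rng(11)
theta = rng.normal(1.0, 0.15, size=2000)        # samples of the uncertain parameter

def response(u, th):                            # predicted constrained output
    return th * u**2

u = 1.8                                         # candidate experiment design variable
y_max = 4.0                                     # nominal constraint: response <= y_max
samples = response(u, theta)
backoff = np.quantile(samples, 0.95) - samples.mean()   # margin for 95% feasibility
feasible = samples.mean() <= y_max - backoff    # design against the tightened bound
print(backoff, feasible)
```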

19.
Distributed or networked model predictive control (MPC) can provide a computationally efficient approach that achieves high levels of performance for plantwide control, where the interactions between processes can be determined from the information exchanged among controllers. Distributed controllers may exchange information at a lower rate to reduce the communication burden. A dissipativity-based analysis is developed to study the effects of low communication rates on plantwide control performance and stability. A distributed dissipativity-based MPC design approach is also developed to guarantee the plantwide stability and minimum plantwide performance with low communication rates. These results are illustrated by a case study of a reactor-distillation column network. © 2015 American Institute of Chemical Engineers AIChE J, 61: 3288–3303, 2015

20.
An effective model for predicting multicomponent aerosol evaporation in the upper respiratory system that is capable of estimating the vaporization of individual components is needed for accurate dosimetry and toxicology analyses. In this study, the performance of evaporation models for multicomponent droplets over a range of volatilities is evaluated based on comparisons to available experimental results for conditions similar to aerosols in the upper respiratory tract. Models considered include a semiempirical correlation approach as well as resolved-volume computational simulations of single and multicomponent aerosol evaporation to test the effects of variable gas-phase properties, surface blowing velocity, and internal droplet temperature gradients. Of the parameters assessed, concentration-dependent gas-phase specific heat had the largest effect on evaporation and should be taken into consideration for respiratory aerosols that contain high-volatility species, such as n-heptane, at significant concentrations. For heavier droplet components or conditions below body temperatures, semiempirical estimates were shown to be appropriate for respiratory aerosol conditions. In order to reduce the number of equations and properties required for complex mixtures, a resolved-volume evaporation model was used to identify a twelve-component surrogate representation of potentially toxic JP-8 fuel based on comparisons to experimentally reported droplet evaporation data. Due to the relatively slow evaporation rate of JP-8 aerosols, results indicate that a semiempirical evaporation model in conjunction with the identified surrogate mixture provides a computationally efficient method for computing droplet evaporation that can track individual toxic markers. However, semiempirical methodologies are in need of further development to effectively compute the evaporation of other higher-volatility aerosols for which variable gas-phase specific heat does play a significant role.
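For orientation, the simplest semiempirical droplet-evaporation calculation is the classical d-squared law, where the squared diameter decreases linearly in time. The sketch below extends it crudely to a few components with separate rate constants; the mass fractions and rate constants are arbitrary illustrative values, not JP-8 data, and the additive per-component treatment is a simplifying assumption.

```python
# Minimal d-squared-law sketch for a multicomponent droplet.
import numpy as np

d0 = 10e-6                                    # initial droplet diameter, m
frac = np.array([0.3, 0.5, 0.2])              # surrogate component mass fractions (assumed)
K = np.array([5e-10, 2e-10, 5e-11])           # evaporation constants, m^2/s (assumed)

t = np.linspace(0, 0.1, 200)                  # time, s
# each component's share of d^2 shrinks at its own rate, floored at zero
d2 = np.sum(frac[None, :] * np.clip(d0**2 - K[None, :] * t[:, None], 0, None), axis=1)
d = np.sqrt(d2)                               # droplet diameter history
print(d[0], d[-1])
```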

