Similar articles
20 similar articles found
1.
We discuss certain basic features of the equation-free (EF) approach to modeling and computation for complex/multiscale systems. We focus on links between the equation-free approach and tools from systems and control theory (design of experiments, data analysis, estimation, identification and feedback). As our illustrative example, we choose a specific numerical task (the detection of stability boundaries in parameter space) for stochastic models of two simplified heterogeneous catalytic reaction mechanisms. In the equation-free framework the stochastic simulator is treated as an experiment (albeit a computational one). Short bursts of fine-scale simulation (short computational experiments) are designed, executed, and their outputs processed and fed back to the process, in integrated protocols aimed at performing the particular coarse-grained task (the detection of a macroscopic instability). Two distinct approaches are presented; one is a direct translation of our previous protocol for adaptive detection of instabilities in laboratory experiments [Rico-Martinez, R., Krischer, K., Flatgen, G., Anderson, J. S., & Kevrekidis, I. G. (2003). Adaptive detection of instabilities: An experimental feasibility study. Physica D, 176, 1–18]; the second approach is motivated by numerical bifurcation algorithms for critical point detection. A comparison of the two approaches brings forth a key feature of equation-free computation: computational experiments can easily be initialized at will, in contrast to laboratory ones.
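A minimal sketch of the coarse time-stepper underlying this entry may help. Everything below is a hypothetical stand-in (a toy ensemble of noisy particles relaxing toward a fixed point, not the paper's catalytic reaction simulators); the point is the lift/run/restrict structure and the coarse time derivative that instability-detection protocols act on:

```python
import numpy as np

rng = np.random.default_rng(0)

def fine_scale_burst(x0, n_particles=10_000, dt=1e-3, n_steps=50, lam=1.0):
    """Toy fine-scale simulator: an ensemble of noisy particles relaxing
    toward the origin, standing in for the stochastic reaction simulator."""
    x = x0 + 0.01 * rng.standard_normal(n_particles)   # lift: coarse -> fine
    for _ in range(n_steps):
        x += -lam * x * dt + 0.05 * np.sqrt(dt) * rng.standard_normal(n_particles)
    return x

def coarse_time_stepper(X0, horizon):
    """Run a short fine-scale burst and restrict back to the coarse state."""
    ensemble = fine_scale_burst(X0, n_steps=int(round(horizon / 1e-3)))
    return ensemble.mean()                             # restriction

# The coarse time derivative estimated from a short burst is the quantity
# an instability-detection protocol acts on (here close to -lam * X0).
X0, h = 1.0, 0.05
dXdt = (coarse_time_stepper(X0, h) - X0) / h
print(f"estimated coarse dX/dt at X = {X0}: {dXdt:.3f}")
```

Because the computational experiment can be re-initialized at any coarse state, this derivative can be probed wherever a bifurcation algorithm requests it, which is exactly the contrast with laboratory experiments noted above.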

2.
This work provides a framework for linear model predictive control (MPC) of nonlinear distributed parameter systems (DPS), allowing the direct utilization of existing large‐scale simulators. The proposed scheme is adaptive and it is based on successive local linearizations of the nonlinear model of the system at hand around the current state and on the use of the resulting local linear models for MPC. At every timestep, not only the future control moves are updated but also the model of the system itself. A model reduction technique is integrated within this methodology to reduce the computational cost of this procedure. It follows the equation‐free approach (see Kevrekidis et al., Commun Math Sci. 2003;1:715–762; Theodoropoulos et al., Proc Natl Acad Sci USA. 2000;97:9840‐9843), according to which the equations of the model (and consequently of the simulator) need not be given explicitly to the controller. The latter forms a “wrapper” around an existing simulator using it in an input/output fashion. This algorithm is designed for dissipative DPS, dissipativity being a prerequisite for model reduction. The equation‐free approach renders the proposed algorithm appropriate for multiscale systems and enables it to handle large‐scale systems. © 2011 American Institute of Chemical Engineers AIChE J, 2012
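The successive-local-linearization step at the core of such a scheme can be sketched with finite differences, treating the simulator strictly as an input/output map, consistent with the equation-free "wrapper" idea. The one-state simulator `f` below is an illustrative toy, not the paper's distributed parameter model:

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Local linear model x+ ~ f(x0,u0) + A (x - x0) + B (u - u0), built by
    finite differences: the simulator f is used purely as an input/output
    map, so no explicit model equations are handed to the controller."""
    n, m = len(x0), len(u0)
    fx0 = f(x0, u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - fx0) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - fx0) / eps
    return A, B, fx0

# Toy one-state nonlinear "simulator" standing in for the black-box DPS code.
def f(x, u):
    return np.array([0.9 * x[0] + 0.1 * x[0] ** 2 + u[0]])

A, B, fx0 = linearize(f, np.array([1.0]), np.array([0.0]))
print("A =", A, "B =", B)   # A is close to [[1.1]], B close to [[1.0]]
```

In the adaptive scheme described above, a (reduced) local model of this kind would be rebuilt at every timestep around the current state before the linear MPC problem is solved.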

3.
4.
To respond to the changing needs of the chemical and related industries, both to meet today's economic demands and to remain competitive in global trade, modern chemical engineering must satisfy both the market requirements for specific nano- and microscale end-use properties of products and the social and environmental constraints of industrial meso- and macroscale processes. This calls for an integrated systems approach to complex, multidisciplinary, non-linear, non-equilibrium processes and phenomena occurring on different length and time scales of the supply chain; that is, a good understanding of how phenomena at a smaller length scale relate to properties and behaviour at a longer length scale (from the molecular scale to the production scale). This has been defined as the triplet “molecular Processes-Product-Process (3PE)” integrated multiscale approach of chemical engineering. Indeed, modern chemical engineering can be summarized by four main objectives: (1) Increase productivity and selectivity through intensification of intelligent operations and a multiscale approach to process control: nano- and micro-tailoring of materials with controlled structure. (2) Design novel equipment based on scientific principles and new production methods: process intensification using multifunctional reactors and micro-engineering for microstructured equipment. (3) Manufacture end-use properties to synthesize structured products, combining several functions required by the customer, with a special emphasis on complex fluids and solids technology, necessitating molecular modeling, polymorph prediction and sensor development. (4) Implement multiscale computational chemical engineering modeling and simulation in real-life situations from the molecular scale to the production scale, e.g., to understand how phenomena at a smaller length scale relate to properties and behaviour at a longer length scale.
The presentation emphasizes the 3PE multiscale approach of chemical engineering for investigating the preceding objectives, and its success owing to today's considerable progress in the use of scientific instrumentation, in modeling, simulation and computer-aided tools, and in systematic design methods.

5.
We demonstrate a “coarse time-stepper” based approach for the extraction of continuum-level stability and bifurcation information from kinetic-theory-based lattice Boltzmann (LB) simulations. Acting directly on the LB simulator, we sidestep the necessity of deriving macroscopic, explicitly closed continuum conservation equations. The approach is used to analyze the dynamics and the oscillatory instability of two-dimensional periodic arrays of gas bubbles rising in a liquid.

6.
The one-dimensional dispersion model has been solved analytically as well as numerically to describe flow in continuous “closed” boundary systems using the celebrated Danckwerts boundary conditions. Nevertheless, a continuous-state stochastic approach can sometimes be more appropriate, especially when input fluctuations are on the same order as the time scale of the system; in such cases an accurate treatment of the boundary conditions is indispensable for the successful application of the method. A deterministic approach was carried out in which the differential equation was solved using Fourier's method and the Laplace transform. These solutions were used as a yardstick to assess the precision of the stochastic solution with its proposed boundary conditions conforming to the Danckwerts boundary conditions. The problem is somewhat simplified if the convection and dispersion terms are assumed to be constants, independent of space and time. A stochastic differential equation was thus employed, governed by the Wiener process and solved using the Euler-Maruyama method.
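A sketch of the Euler-Maruyama particle approach for a closed-closed vessel may clarify the boundary treatment: particles reflect at the inlet and are absorbed at the outlet, mimicking the Danckwerts closed boundaries. All parameter values are illustrative, and this is a generic scheme rather than the authors' exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative constants (not the paper's values): convection u and
# dispersion D are uniform, as in the simplification described above.
u, D, L = 1.0, 0.05, 1.0
dt, n_particles = 1e-3, 2000

x = np.zeros(n_particles)              # all particles start at the inlet
t_exit = np.full(n_particles, np.nan)  # first-passage time to the outlet
t = 0.0
while np.isnan(t_exit).any() and t < 5.0:
    alive = np.isnan(t_exit)
    dW = np.sqrt(dt) * rng.standard_normal(alive.sum())
    x[alive] += u * dt + np.sqrt(2.0 * D) * dW   # Euler-Maruyama step
    x[alive] = np.abs(x[alive])                  # reflect at the closed inlet
    done = alive.copy()
    done[alive] = x[alive] >= L                  # absorb at the outlet
    t_exit[done] = t
    t += dt

# For a closed-closed vessel the mean residence time should be near L/u.
print(f"mean residence time: {np.nanmean(t_exit):.3f} (L/u = {L / u:.1f})")
```

The mean residence time of a closed-closed vessel equals L/u regardless of the dispersion coefficient, which gives a quick sanity check that the reflecting/absorbing boundary treatment is consistent.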

7.
For analysis, design and model-based control of crystallization processes, typically population balance models or reduced models derived therefrom are used. Usually the kinetic parameters in these models are determined by analyzing measured concentration trajectories and/or particle size distributions using parameter estimation procedures. In the case of preferential crystallization of enantiomers the analysis of experiments is complex, since there are two “competing” crystal populations. In this field, batch processes are often performed using seeds of the desired enantiomer. Currently, it is particularly challenging to quantify and optimize a new concept: the so-called “auto seeded programmed polythermal preferential crystallization” (“as3pc” [Coquerel, G., Petit, M.-N., Bouaziz, R., 2000. Method of resolution of two enantiomers by crystallization. United States Patent, Patent number: 6,022,409]). In order to design and optimize this process, the temperature-dependent kinetic constants for crystal growth, nucleation and dissolution have to be known. In this work a reduced model for this auto seeded process is presented. The general identifiability of the model parameters is investigated, along with some suggestions on how to reparameterize the kinetic terms involved. The values of the identified key parameters are estimated via conventional least-squares optimization using experimental data determined for the model system threonine/water. Parameter confidence and cross-correlation are discussed, and finally the model is validated using experiments not used for parameter estimation.

8.
The problem of optimal time-constant and time-varying operation for transport-reaction processes is considered, when the cost functional and/or equality constraints necessitate the consideration of phenomena that occur over disparate length scales. Multiscale process models are initially developed, linking continuum conservation laws with microscopic scale simulators. Subsequently, order reduction techniques for dissipative partial-differential equations are combined with adaptive tabulation of microscopic simulation data to reduce the computational requirements of the optimization problem, which is then solved using standard search algorithms. The method is applied to a conceptual thin film deposition process to compute optimal substrate-surface temperature profiles that simultaneously maximize film-deposition-rate uniformity (macroscale objective) and minimize surface roughness (microscale objective) across the film surface for a steady-state process operation. Subsequently, optimal time-varying policies of substrate temperature and precursor inlet concentrations are computed under the assumption of quasi-steady-state process operation.

9.
This work proposes a multiscale modeling and model-based feedback control framework for the delignification process in a batch-type pulp digester. Specifically, we focus on a hardwood chip in the digester and develop a multiscale model capturing both the evolution of microscopic properties such as the pore size and shape distributions in the solid phase and the dynamic changes in the temperature and component concentrations in the liquor phase. While the macroscopic model adopts the continuum hypothesis based on the Purdue model, a novel microscopic model is developed using a kinetic Monte Carlo algorithm, accounting for the dissolution of lignin, cellulose, and hemicellulose contacting the liquor phase. A reduced-order model was built to design a Luenberger observer for state estimation, which is then used to develop a model-based control system. The simulation results demonstrated that the proposed methodology was able to regulate both the Kappa number and porosity to desired values.
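The Luenberger observer used for state estimation has a standard discrete-time form, sketched below on a small linear system with made-up matrices (not the reduced digester model):

```python
import numpy as np

# Illustrative reduced-order linear model (invented matrices):
#   x[k+1] = A x[k] + B u[k],   y[k] = C x[k]
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Lg = np.array([[0.5], [0.3]])   # observer gain; A - Lg @ C is stable here

x = np.array([[1.0], [-1.0]])   # true state (unknown to the observer)
xh = np.zeros((2, 1))           # observer estimate, started from zero
for k in range(60):
    u = np.array([[0.1]])
    y = C @ x                                  # measured output
    xh = A @ xh + B @ u + Lg @ (y - C @ xh)    # Luenberger correction
    x = A @ x + B @ u                          # true plant step
print("estimation error:", float(np.linalg.norm(x - xh)))
```

The observer is a copy of the model driven by the measured output error; choosing the gain so that the eigenvalues of A - Lg C lie inside the unit circle makes the estimation error decay, after which the estimate can feed a model-based controller as described above.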

10.
Transport limited heterogeneous reactions with asymmetric transport rates in the non-reacting phase can exhibit an interesting switch in the concentrations of the reactants in the reacting phase from one limiting reactant to the other. This switch, called “cross-over” [Mchedlov-Petrossyan P.O., Khomenko G., Zimmerman W.B., 2003a. Nearly irreversible, fast heterogeneous reactions in premixed flow. Chemical Engineering Science 58, 3005-3023; Mchedlov-Petrossyan P.O., Zimmerman W.B., Khomenko G.A., 2003b. Fast binary reactions in a heterogeneous catalytic batch reactor. Chemical Engineering Science 58, 2691-2703], relates to the optimum design of the tubular reactor as all the reactants in the reacting phase are completely consumed at cross-over. The cross-over phenomenon, which has been studied by a number of researchers using phenomenological modelling, is investigated here by developing a distributed model using level-set simulations, in order to explore the possibility of the existence of cross-over in the frame of reference of a moving droplet. Cross-over occurs for a droplet moving due to buoyancy with asymmetric transfer rates of the reactants in the non-reacting phase and an instantaneous reaction occurring inside the droplet (reacting phase). The cross-over length obtained using the level-set simulation is found to be within 0.7-8% of that obtained using the phenomenological model. Computational experiments are performed by varying the ratios of the initial concentrations of the reactants and the transfer rates of the reactants, in order to obtain the parametric region for the existence of cross-over which is also compared with the theoretical prediction.

11.
Ying Jin, Qi Liu, Fuel, 2011, 90(8): 2592–2597
The aggregation of micron-sized silica particles in non-aqueous (i.e. hydrocarbon) media was examined on both the macroscopic and microscopic scales. The silica surfaces were either “clean” or “treated” (i.e. with irreversibly adsorbed materials from Athabasca bitumen); the hydrocarbons were mixtures of toluene and heptane at various ratios (to allow for different degrees of “aromaticity” in the solvent). On the macroscopic scale, gravity settling of the silica beads in non-aqueous media was monitored, and particle-particle interactions were characterized semi-empirically by the initial rates of sedimentation. On the microscopic scale, adhesive forces between individual glass spheres were directly measured using the microcantilever technique (again, in non-aqueous liquids). It was found that, for clean silica spheres, the settling rates of the suspensions were relatively insensitive to the interparticle adhesive forces. This is in contrast to the case for treated silica beads, where strong correlation was observed between the settling rate and particle-particle adhesion. These findings may have important relevance to the commercial “paraffinic froth treatment” process.

12.
In this paper, we propose the first numerical study of nanotube-based torsional oscillators, carried out by developing a new multiscale model. The edge-to-edge technique was employed in this multiscale method to couple the molecular model (the nanotubes) and the continuum model (the metal paddle). Without loss of accuracy, the metal paddle was treated as a rigid body in the continuum model. Torsional oscillators containing (10,0) nanotubes were mainly studied. We considered various initial angles of twist to characterize the linear/nonlinear behaviour of the torsional oscillators. Furthermore, the effects of vacancy defects and temperature on the mechanisms of nanotube-based torsional oscillators are discussed.

13.
Multiphase bioceramics based on wollastonite and wollastonite/hydroxylapatite (W/HAp) have been successfully prepared by the heat treatment of a filler-containing preceramic polymer. CaO-bearing precursors (Ca-carbonate, Ca-acetate, and CaO nano-particles) were dispersed in a solution of silicone resin, subsequently dried and pyrolysed in nitrogen. The reaction between silica, deriving from the oxycarbide (SiOC) residue of the silicone resin, and the CaO “active filler” led to the formation of several calcium silicates, mainly consisting of wollastonite (CaSiO3) in both its low- and high-temperature forms. The phase assemblage of the final ceramic varied with the pyrolysis temperature (from 1000 to 1200 °C). HAp was additionally inserted as a “passive filler” (i.e. not reacting with SiOC) for the preparation of bioceramics based on W/HAp mixtures.

14.
Quantitative methods for individualizing and optimizing the dosage regimen and clinically monitoring each patient are desirable to ensure that each patient obtains effective therapeutic benefit while minimizing undesirable side effects. This is of special concern for medicines that are expensive or whose toxic side effects are severe (e.g., oncological agents). The optimal dosage regimen for an individual is a combination of dose amount and/or dosing interval (i.e., time between doses) which minimizes the risk that the drug exposure deviates from the desired therapeutic window. The therapeutic window is defined as the range of drug exposure (e.g., blood concentration, area under the concentration-time curve) which is below a threshold defining an acceptable toxic side effect and above a threshold defining a minimum acceptable level of therapeutic efficacy. In this work, the dosage regimen optimization problem defined in terms of general pharmacometric models (i.e., described by differential-algebraic equations) is presented, and a solution approach is outlined which uses a scenario-based stochastic optimization formulation that minimizes a risk metric. The scenarios are derived from the posterior joint probability distribution of the individual's pharmacometric parameters, which is obtained following an approximate Bayesian inference approach. A Smolyak rule is used for the selection of the scenarios (i.e., combinations of pharmacometric parameters) to be considered and for computing the approximation to the risk metric. Two case studies, gabapentin and cyclophosphamide, are presented to elucidate the advantages and limitations of the proposed approach. The numerical results demonstrate that low-risk optimal solutions can be generated via the proposed stochastic optimization, while significantly reducing the computational burden in comparison with the conventional Markov chain Monte Carlo plus grid search approach.
This partially alleviates implementation issues preventing the deployment of dosage regimen individualization in clinical practice. Since stochastic optimization has been extensively used in other domains, the approach for uncertainty characterization proposed in this work may have general relevance beyond the pharmacometrics domain. © 2013 American Institute of Chemical Engineers AIChE J, 59: 3296–3307, 2013
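The scenario-based risk minimization described above can be sketched in a few lines. The one-compartment exposure model, the therapeutic window, and the lognormal "posterior" below are all invented for illustration, and plain Monte Carlo sampling stands in for the Smolyak rule:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical one-compartment model at steady state: average exposure
# C = dose_rate / CL. Window bounds, the lognormal "posterior" over
# clearance CL, and all numbers are invented for illustration.
c_lo, c_hi = 4.0, 10.0                                      # window (mg/L)
CL = rng.lognormal(mean=np.log(5.0), sigma=0.25, size=500)  # scenarios (L/h)

def risk(dose_rate):
    """Fraction of scenarios whose exposure falls outside the window."""
    conc = dose_rate / CL
    return float(np.mean((conc < c_lo) | (conc > c_hi)))

# Score each candidate dose by its risk metric and keep the minimizer.
doses = np.linspace(10.0, 60.0, 101)
best = float(doses[np.argmin([risk(d) for d in doses])])
print(f"lowest-risk dose rate: {best:.1f} mg/h, risk = {risk(best):.2f}")
```

Each candidate regimen is scored by the fraction of posterior scenarios whose exposure leaves the window; a quadrature rule such as Smolyak's replaces the raw sample with far fewer, weighted scenario points, which is where the reported computational savings come from.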

15.
An integrated planning and scheduling strategy for refineries under uncertainty
A strategy for the integration of production planning and scheduling in refineries is proposed. It relies on a rolling-horizon approach and a two-level decomposition, involving an upper-level multiperiod mixed-integer linear programming (MILP) model and a lower-level simulation system, extended from our previous framework for short-term scheduling problems [Luo, C.P., Rong, G., "Hierarchical approach for short-term scheduling in refineries", Ind. Eng. Chem. Res., 46, 3656-3668 (2007)]. The main purpose of this extended framework is to reduce the number of variables and the size of the optimization model, and to quickly find the optimal solution for the integrated planning/scheduling problem in refineries. Uncertainties are also considered: an integrated robust optimization approach is introduced to cope with uncertain parameters with both continuous and discrete probability distributions.

16.
Polymer nanocomposites have great potential to become a dominant coating material in a wide range of applications in the automotive, aerospace, ship-building, construction, and pharmaceutical industries. However, how to realize design sustainability for this type of nanostructured material, and how to ensure true optimality of product quality and process performance in coating manufacturing, remain open challenges. The major difficulties arise from the intrinsic multiscale nature of the material-process-product system and the need to manage the high levels of complexity and uncertainty in design and manufacturing processes. In this work, the objectives of sustainable design and manufacturing are accomplished simultaneously by resorting to multiscale systems theory and engineering sustainability principles. The principal idea is to achieve exceptional system performance through concurrent characterization and optimization of materials, product and associated manufacturing processes covering a wide range of length and time scales. Multiscale modeling and simulation techniques, ranging from microscopic molecular modeling to classical continuum modeling, are seamlessly coupled. The integration of different methods and theories at individual scales allows the quantitative prediction of macroscopic system performance from fundamental molecular behavior. Furthermore, mathematically rigorous and methodologically viable approaches are pursued to achieve sustainability-goal-oriented design of material-process-product systems. The introduced methodology can greatly assist experimentalists in novel material invention and new knowledge discovery. At the same time, it provides scientific guidance and reveals new opportunities and effective strategies for achieving sustainable manufacturing. The methodological attractiveness is demonstrated by a detailed case study on the design of thermoset nanocomposite coatings.

17.
This work is the logical continuation of our previous work [Devatine, A., Chiciuc, I., Poupot, C., Mietton-Peuchot, M., 2007. Micro-oxygenation of wine in presence of dissolved carbon dioxide. Chemical Engineering Science 62, 4579-4588] on the micro-oxygenation of wine, in which a surprising decrease in the value of the apparent kLa was observed when dissolved carbon dioxide was present in the liquid phase. Only a qualitative explanation was given, and no modelling was proposed. Here, we attempt to fill this gap using very simple equations. In particular, the rising bubble velocity was assumed to follow Stokes' law, and no interaction between rising bubbles was considered. By making the necessary simplifications, analytical solutions to the set of equations are proposed and simple-to-use expressions for the oxygen transfer yield are established. From these, the importance of the ratio of column height to diffuser pore diameter is clearly seen. Comparison with our previous experimental results validates the prominent role of the “dilution effect” inside the bubble with respect to the observed decrease in the apparent kLa.

18.
Systematic methodologies for the optimal location of spatial measurements, for efficient estimation of the parameters of distributed systems, are investigated. A review of relevant methods in the literature is presented, and a comparison between the results obtained with three distinctive existing techniques is given. In addition, a new approach based on Proper Orthogonal Decomposition (POD) is introduced to address this important problem and discussed with the aid of illustrative benchmark case studies from the literature. Based on the results obtained here, it was observed that the method based on the Gram determinant evolution (Vande Wouwer et al., 2000) does not always produce accurate results; it is strongly dependent on the behaviour of the sensitivity coefficients and requires extensive calculations. The method based on max-min optimisation (Alonso, Kevrekidis, Banga, & Frouzakis, 2004) assigns optimal sensor locations to the positions where the system outputs reach their extremal values; however, in some cases it produces more than one optimal solution. The D-optimal design method (Uciński, 2003) produces the optimal number and spatial positions of measurements based on the behaviour (rather than the magnitude) of the sensitivity functions. Here we show that the extremal values of the POD modes can be used directly to compute optimal sensor locations (as opposed, e.g., to Alonso, Kevrekidis, et al., 2004, where PODs are merely used to reduce the system and further calculations are needed to compute sensor locations). Furthermore, we demonstrate the equivalence between the extrema of the POD modes and those of the sensitivity functions. The added value of directly using PODs for the computation of optimal sensor locations is the computational efficiency of the method, side-stepping the tedious computation of sensitivity-coefficient Jacobian matrices and using only system responses and/or experimental results directly. 
Furthermore, the inherent combination of model reduction and sensor location estimation in this method becomes more important as the complexity of the original distributed parameter system increases.

19.
A vehicle testing programme has been designed in order to calibrate and validate an empirical evaporative emissions model developed in previous work. To this aim, a large number of “targeted” tests have been performed on four vehicles covering a wide range of the model input parameters, such as fuel volatility, ambient temperature, fuel tank and carbon canister size, and fuel system materials. The fair agreement between modelled and measured values demonstrates that “bottom-up” modelling work and “top-down” vehicle testing may be combined to predict evaporative emissions at the vehicle level.

20.
The operation of a blast furnace system involves structures at several different scales, such as the atomic/molecular microscale, the device/boundary mesoscale, and the working-procedure/operation-unit macroscale. However, traditional analytic methods, which average over a fixed scale, have not paid enough attention to the multiscale features of the blast furnace system. For this reason, the current work performs a multiscale identification and dynamical analysis of the blast furnace system from time-series sets of two important components of blast furnace hot metal, i.e., silicon content and sulfur content, collected from a small-scale blast furnace. The results give a strong indication of multiscale characteristics and multiple dynamics, i.e., randomness, chaos, and limit cycles, with different rates of contribution to the whole system. Furthermore, compared with the original blast furnace system, every subscale structure has lower complexity. These findings can serve as guidelines for modeling, control and optimization of complex blast furnace systems from a multiscale viewpoint, and may throw more light on understanding and characterizing their complex dynamics. © 2011 American Institute of Chemical Engineers AIChE J, 2011
