Similar documents
20 similar documents found (search time: 31 ms)
1.
Causal explanation and empirical prediction are usually addressed separately when modelling ecological systems. This potentially leads to erroneous conflation of model explanatory and predictive power, to predictive models that lack ecological interpretability, or to limited feedback between predictive modelling and theory development. These are fundamental challenges to the appropriate statistical and scientific use of ecological models. To help address such challenges, we propose a novel, integrated modelling framework which couples explanatory modelling for causal understanding and input variable selection with a machine learning approach for empirical prediction. Exemplar datasets from the field of freshwater ecology are used to develop and evaluate the framework, based on 267 stream and river monitoring stations across England, UK. These data describe spatial patterns in benthic macroinvertebrate community indices that are hypothesised to be driven by meso-scale physical and chemical habitat conditions. Whilst explanatory models developed using structural equation modelling performed strongly (r2 for two macroinvertebrate indices = 0.64–0.70), predictive models based on extremely randomised trees demonstrated moderate performance (r2 for the same indices = 0.50–0.61). However, by coupling explanatory and predictive components, our proposed framework yields ecologically interpretable predictive models which also maintain the parsimony and accuracy of models based solely on predictive approaches. This significantly enhances the opportunity for feedback among causal theory, empirical data and prediction within environmental modelling.
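A minimal sketch of the "extremely randomised" idea behind the predictive component: an ensemble of decision stumps, each splitting on a randomly chosen feature at a randomly drawn threshold rather than an optimised one. This is toy code under assumed data, not the authors' implementation:

```python
import random

def fit_extra_stumps(X, y, n_trees=200, seed=0):
    """Ensemble of 'extremely randomised' decision stumps: each stump
    splits on a randomly chosen feature at a randomly drawn threshold
    (no optimised split search) and predicts the mean of y on each side."""
    rng = random.Random(seed)
    overall_mean = sum(y) / len(y)
    stumps = []
    for _ in range(n_trees):
        j = rng.randrange(len(X[0]))                    # random feature
        lo = min(row[j] for row in X)
        hi = max(row[j] for row in X)
        t = rng.uniform(lo, hi)                         # random threshold
        left = [yi for row, yi in zip(X, y) if row[j] <= t]
        right = [yi for row, yi in zip(X, y) if row[j] > t]
        stumps.append((j, t,
                       sum(left) / len(left) if left else overall_mean,
                       sum(right) / len(right) if right else overall_mean))
    return stumps

def predict(stumps, row):
    """Average the stump predictions."""
    return sum(l if row[j] <= t else r for j, t, l, r in stumps) / len(stumps)

# Toy demo: a step function in a single (hypothetical) habitat variable.
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0.0, 0.0, 10.0, 10.0]
stumps = fit_extra_stumps(X, y)
lo_pred, hi_pred = predict(stumps, [0.0]), predict(stumps, [3.0])
```

Averaging many such randomly split stumps smooths out the individual random thresholds, which is the essential mechanism of extremely randomised tree ensembles.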

2.
This paper develops a modelling methodology for studying relations defined a priori in two dimensions, z = u(x, y), for which experimental data are known. An innovation is that the methodology is based on two-dimensional finite element techniques, in which the model is defined by its values at a finite number of points, making it possible to obtain its value at any point. Its application permits obtaining representational models of the relation. A computational algorithm is presented; its program, called Finit Trap 2D, generates families of models, with criteria for model selection defined in terms of information parameters. This scientific research technique complements those existing in the scientific literature for generating mathematical models from experimental data [Cortés M, Villacampa Y, Mateu J, Usó JL. A new methodology for modelling highly structured systems. Environmental Modelling and Software 2000;15(5):461–70; S-PLUS 2000. Guide to statistics, vols. 1–2. MathSoft Inc.; 1999; SPSS 12.0. Guide to statistics, MathSoft Inc.; 2003; Verdu F. Un algoritmo para la construcción múltiple de modelos matemáticos no lineales y el estudio de su estabilidad, Doctoral thesis, Universidad de Alicante; 2001; Verdu F, Villacampa Y. A computational algorithm for the multiple generation of nonlinear mathematical models and stability study. Advances in Engineering Software 2008;39(5):430–7; Villacampa Y, Cortés M, Vives F, Usó JL, Castro MA. A new computational algorithm to construct mathematical models. In: Ecosystems and Sustainable Development II. Advances in Ecological Sciences, vol. 2, WIT Press: Southampton, Boston; 1999], and will be useful in the effort to simplify the model. It should be emphasised that representational models have been generated from two-dimensional finite element models. This will naturally lead to their future application in processes described by partial differential equations whose coefficients are functions A(x, y) for which only experimental data are known.

3.
4.
Many modelling techniques tend to address "late-phase" requirements, while many critical modelling decisions (such as determining the main goals of the system, how the stakeholders depend on each other, and what alternatives exist) are taken during early-phase requirements engineering. The i* modelling framework is a semiformal agent-oriented conceptual modelling language that is well suited to answering these questions. This paper addresses a key challenge faced in the practical deployment of agent-oriented conceptual modelling frameworks such as i*. Our approach to addressing this problem is based on the observation that the value of conceptual modelling in the i* framework lies in its use as a notation complementary to existing requirements modelling and specification languages, i.e., the expressive power of i* complements rather than supplants that of existing notations. The use of i* in this fashion requires that we define methodologies that support the co-evolution of i* models with more traditional specifications. This research examines how this might be done with formal specification notations (specifically Z).

5.
Augmented reality (AR) technologies are only beginning to be used as interfaces in CAD tools, allowing the user to perceive 3D models over a real environment. The influence of AR on the conceptualization of products whose configuration, shape and dimensions depend mainly on the context remains unexplored. We aimed to prove that modelling in AR environments allows the context to be used in real time as an information input, making the iterative design process more efficient. To this end, we developed a tool called AIR-MODELLING, in which the designer is able to create virtual conceptual products by hand gestures while interacting directly with the real scenario. We conducted a test comparing designers' performance using AIR-MODELLING and a traditional CAD system, and obtained an average reduction of 44% in modelling time in 76% of the cases. We found that modelling in AR environments using the hands as interface allows the designer to quickly and efficiently conceptualize potential solutions using the spatial restrictions of the context as a real-time information input. Additionally, modelling at natural scale, directly over the real scene, keeps the designer's attention from being drawn to dimensional details and allows him/her to focus on the product itself and its relation with the environment.

6.
L2 and L1 optimal linear time-invariant (LTI) approximation of discrete-time nonlinear systems, such as nonlinear finite impulse response (NFIR) systems, is studied via a signal distribution theory motivated approach. The use of a signal distribution theoretic framework facilitates the formulation and analysis of many system modelling problems, including system identification problems. Specifically, a very explicit solution to the L2 (least squares) LTI approximation problem for NFIR systems is obtained in this manner. Furthermore, the L1 (least absolute deviations) LTI approximation problem for NFIR systems is essentially reduced to a linear programming problem. Active LTI modelling emphasizes model quality based on the intended use of the models in linear controller design. Robust stability and LTI approximation concepts are studied here in a nonlinear systems context. Numerical examples are given illustrating the performance of the least squares (LS) method and the least absolute deviations (LAD) method with LTI models against nonlinear unmodelled dynamics.
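As a toy illustration of least-squares LTI approximation of an NFIR system (a hypothetical example, not the paper's signal-distribution framework), one can fit a two-tap FIR model to a nonlinear FIR system by solving the normal equations directly:

```python
import random

# Hypothetical NFIR system: y(t) = u(t) + 0.5 * u(t-1)**2
# (nonlinear in the lagged input; chosen for illustration only).
random.seed(1)
u = [random.uniform(-1.0, 1.0) for _ in range(2000)]
y = [u[t] + 0.5 * u[t - 1] ** 2 for t in range(1, len(u))]

# Least-squares LTI (two-tap FIR) approximation:
#   y_hat(t) = a * u(t) + b * u(t-1)
# obtained by solving the 2x2 normal equations directly.
x0, x1 = u[1:], u[:-1]
s00 = sum(v * v for v in x0)
s01 = sum(v * w for v, w in zip(x0, x1))
s11 = sum(w * w for w in x1)
r0 = sum(v * c for v, c in zip(x0, y))
r1 = sum(w * c for w, c in zip(x1, y))
det = s00 * s11 - s01 * s01
a = (s11 * r0 - s01 * r1) / det
b = (s00 * r1 - s01 * r0) / det
# For zero-mean symmetric input, u(t-1)**2 is uncorrelated with both taps,
# so the LS solution is close to (a, b) = (1, 0): the quadratic term is
# invisible to the best LTI approximant and shows up only as residual error.
```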

7.
Optimal squared-error and absolute-error based approximation problems for static polynomial models of nonlinear, discrete-time systems are studied in detail. These problems have many similarities with other linear-in-the-parameters approximation problems, such as optimal approximation problems for linear time-invariant models of linear and nonlinear systems. Nonprobabilistic signal analysis is used. Close connections between the studied approximation problems and certain classical topics in approximation theory, such as optimal L2(−1,1) and L1(−1,1) approximation, are established by analysing conditions under which sample averages of static nonlinear functions of the input converge to appropriate Riemann integrals of those functions. These results should play a significant role in the analysis of corresponding system identification and model validation problems. Furthermore, they demonstrate that optimal modelling based on the absolute error can offer advantages over squared-error based modelling. In particular, modelling problems in which some signals possess heavy tails can benefit from absolute-value based signal and error analysis.
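The convergence of sample averages to Riemann integrals mentioned above can be checked numerically; a quick sketch with uniform inputs on [-1, 1] and an illustrative nonlinearity f(x) = x² (not from the paper):

```python
import random

# For i.i.d. inputs u_t uniform on [-1, 1], the sample average of a static
# nonlinearity f converges to the normalised Riemann integral:
#   (1/N) * sum_t f(u_t)  ->  (1/2) * integral of f over [-1, 1]
random.seed(0)
f = lambda x: x * x
N = 200_000
avg = sum(f(random.uniform(-1.0, 1.0)) for _ in range(N)) / N
# (1/2) * integral of x**2 over [-1, 1] equals 1/3.
```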

8.
This paper introduces a graphical, computer-aided modelling methodology that is particularly suited to the concurrent design of multidisciplinary systems, viz. engineering systems with mechanical, electrical, hydraulic or pneumatic components, including interactions of physical effects from various energy domains. Following the introduction, bond graph modelling of multibody systems, as an example of an advanced topic, is briefly addressed in order to demonstrate the potential of this powerful approach to modelling multidisciplinary systems. It is shown how models of multibody systems, including flexible bodies, can be built in a systematic manner.

9.
10.
11.
Stochastic differential equations (SDEs) are established tools for modelling physical phenomena whose dynamics are affected by random noise. By estimating the parameters of an SDE, the intrinsic randomness of a system around its drift can be identified and separated from the drift itself. When it is of interest to model dynamics within a given population, i.e. to model simultaneously the performance of several experiments or subjects, mixed-effects modelling allows between-experiment and within-experiment variability to be distinguished. A framework for modelling dynamics within a population using SDEs is proposed, representing several sources of variation simultaneously: variability between experiments, using a mixed-effects approach, and stochasticity in the individual dynamics, using SDEs. These stochastic differential mixed-effects models have applications in, e.g., pharmacokinetics/pharmacodynamics and biomedical modelling. A parameter estimation method is proposed and computational guidelines for an efficient implementation are given. Finally, the method is evaluated using simulations from standard models such as the two-dimensional Ornstein-Uhlenbeck (OU) and square-root models.
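A minimal simulation sketch of the idea (hypothetical parameter values, Euler-Maruyama discretisation, simulation only, not the paper's estimation method): each subject shares the same OU dynamics but draws its own random effect from a population distribution:

```python
import math
import random

def simulate_ou(theta, mu, sigma, x0, dt, n, rng):
    """Euler-Maruyama path of the OU equation dX = theta*(mu - X) dt + sigma dW."""
    x, path = x0, [x0]
    for _ in range(n):
        x += theta * (mu - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

rng = random.Random(42)
# Mixed effects: each subject draws its own mean level mu_i from a
# population distribution (between-subject variability), while the SDE
# noise captures within-subject stochasticity.
paths = []
for _ in range(5):
    mu_i = rng.gauss(2.0, 0.3)                 # random effect
    paths.append(simulate_ou(theta=1.5, mu=mu_i, sigma=0.2,
                             x0=0.0, dt=0.01, n=1000, rng=rng))
```

After a burn-in, each path fluctuates around its subject-specific level mu_i, so the spread of the final values mixes both sources of variability.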

12.
Early detection in evaporative water installations is one of the keys to fighting the bacterium Legionella, the main cause of Legionnaires' disease. This paper discusses the general structure, elements and operation of a probabilistic expert system capable of predicting the risk of Legionella in real time from remote information on the quality of the water in evaporative installations. The expert system has a master-slave architecture. The slave is a control panel in the installation at risk, containing multi-sensors which continuously provide measurements of chemical and physical variables. The master is a network server which is responsible for communicating with the control panel and is in charge of storing the information received, processing the data in the R environment and publishing the results on a web server. The inference engine of the expert system is constructed using Bayesian networks, which are very useful and powerful models combining probabilistic reasoning and graphical modelling. Bayesian reasoning and Markov chain Monte Carlo algorithms are applied in order to study the relevant unknown quantities involved in the parametric learning and propagation-of-evidence phases.
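The core inference step of such a Bayesian network can be illustrated with a deliberately tiny two-node example; the probabilities below are hypothetical, not the deployed system's actual network:

```python
# Two-node toy network: Risk -> Reading.
P_RISK = 0.05                                   # prior P(Legionella risk)
P_HIGH_READING = {True: 0.90, False: 0.10}      # P(high reading | risk state)

def posterior_risk(high_reading):
    """Bayes' rule: P(risk | reading) is proportional to
    P(reading | risk) * P(risk), normalised over both risk states."""
    like_risk = P_HIGH_READING[True] if high_reading else 1.0 - P_HIGH_READING[True]
    like_safe = P_HIGH_READING[False] if high_reading else 1.0 - P_HIGH_READING[False]
    num = like_risk * P_RISK
    return num / (num + like_safe * (1.0 - P_RISK))
```

A high sensor reading raises the posterior risk well above the 5% prior, while a normal reading pushes it below; the real system propagates such evidence through a much larger network of sensor variables.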

13.
We describe an approach to machine learning from numerical data that combines qualitative and numerical learning. This approach is carried out in two stages: (1) induction of a qualitative model from numerical examples of the behaviour of a physical system, and (2) induction of a numerical regression function that both respects the qualitative constraints and fits the training data numerically. We call this approach Q2 learning, which stands for Qualitatively faithful Quantitative learning. Induced numerical models are "qualitatively faithful" in the sense that they respect qualitative trends in the learning data. The advantages of Q2 learning are that the induced qualitative model enables a (possibly causal) explanation of relations among the variables in the modelled system, and that numerical predictions are guaranteed to be qualitatively consistent with the qualitative model, which eases the interpretation of the predictions. Moreover, as we show experimentally, the qualitative model's guidance of the quantitative modelling process leads to predictions that may be considerably more accurate than those obtained by state-of-the-art numerical learning methods. The experiments include an application of Q2 learning to the identification of a car wheel suspension system: a complex, industrially relevant mechanical system.

14.
This study presents a complete advanced control structure aimed at optimal and efficient energy management for a grid-connected hybrid power plant. The control scheme is composed of process supervision and process control layers, and it offers a way to improve the energy consumption of industrial systems subject to constraints and process demands. The proposed structure combines a model-based predictive controller, formulated within the chance-constraints framework to deal with stochastic disturbances (renewable sources, such as solar irradiance), an optimal finite-state-machine decision system, and disturbance estimation techniques for the prediction of renewable sources. The predictive controller uses feedforward compensation of estimated future disturbances, obtained with nonlinear auto-regressive neural networks with time delays. The proposed controller manages which energy system to use and decides where to store energy among multiple storage options, while always maximizing the use of renewable energy and optimizing energy generation under contract rules (maintaining maximal economic profit). The proposed method is applied to a case study of energy generation in a sugar cane power plant, with non-dispatchable renewable sources (such as photovoltaic and wind power generation) as well as dispatchable sources (such as biomass and biogas). This hybrid power system is subject to operational constraints: it must produce steam at different pressures, sustain internal demands and, crucially, produce and maintain an amount of electric power throughout each month defined by strict contract rules with a local Distribution Network Operator (DNO). This paper justifies the use of this novel approach to optimal energy generation in hybrid microgrids through simulation, illustrating the performance improvement for different cases.

15.
16.
Similarity Joins are extensively used in multiple application domains and are recognized among the most useful data processing and analysis operations. They retrieve all data pairs whose distances are smaller than a predefined threshold ε. While several standalone implementations have been proposed, very little work has addressed the implementation of Similarity Joins as physical database operators. In this paper, we focus on the study, design, implementation, and optimization of a Similarity Join database operator for metric spaces. We present DBSimJoin, a physical database operator that integrates techniques to: enable a non-blocking behavior, prioritize the early generation of results, and fully support the database iterator interface. The proposed operator can be used with multiple distance functions and data types. We describe the changes in each query engine module to implement DBSimJoin and provide details of our implementation in PostgreSQL. We also study ways in which DBSimJoin can be combined with other similarity and non-similarity operators to answer more complex queries, and how DBSimJoin can be used in query transformation rules to improve query performance. The extensive performance evaluation shows that DBSimJoin significantly outperforms alternative approaches and scales very well when important parameters like ε, data size, and number of dimensions increase.
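A naive standalone version of the ε-similarity join (quadratic nested loop over Euclidean points; DBSimJoin itself works inside the query engine on general metric spaces) might look like:

```python
import math

def sim_join(R, S, eps):
    """Naive epsilon similarity join: all pairs (r, s) with dist(r, s) < eps."""
    return [(r, s) for r in R for s in S if math.dist(r, s) < eps]

# Toy demo with two tiny 2D relations.
pairs = sim_join([(0, 0), (5, 5)], [(0, 1), (9, 9)], eps=2.0)
```

The O(|R|·|S|) cost of this nested loop is exactly what metric-space partitioning and early result generation in a physical operator are designed to avoid.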

17.
Natural languages are known for their expressive richness. Many sentences can be used to represent the same underlying meaning. Modelling only the observed surface word sequence can result in poor context coverage and generalization, for example, when using n-gram language models (LMs). This paper proposes a novel form of language model, the paraphrastic LM, that addresses these issues. A phrase-level paraphrase model, statistically learned from standard text data with no semantic annotation, is used to generate multiple paraphrase variants. LM probabilities are then estimated by maximizing their marginal probability. Multi-level language models estimated at both the word level and the phrase level are combined. An efficient weighted finite state transducer (WFST) based paraphrase generation approach is also presented. Significant error rate reductions of 0.5–0.6% absolute were obtained over the baseline n-gram LMs on two state-of-the-art recognition tasks for English conversational telephone speech and Mandarin Chinese broadcast speech using a paraphrastic multi-level LM modelling both word and phrase sequences. When it is further combined with word and phrase level feed-forward neural network LMs, significant error rate reductions of 0.9% absolute (9% relative) and 0.5% absolute (5% relative) were obtained over the baseline n-gram and neural network LMs respectively.

18.
A neurofuzzy system combines the positive attributes of a neural network and a fuzzy system by providing a transparent framework for representing linguistic rules with well-defined modelling and learning characteristics. Unfortunately, their application is limited to problems involving a small number of input variables by the curse of dimensionality, whereby the size of the rule base and of the training set increases as an exponential function of the input dimension. The curse can be alleviated by a number of approaches, but one which has recently received much attention is the exploitation of redundancy. Many functions can be adequately approximated by an additive model whose output is a sum over several smaller-dimensional submodels. This technique is called global partitioning, and the aim of an algorithm designed to construct the approximation is to automatically determine the number of submodels and the subset of input variables for each submodel. The construction algorithm is an iterative process in which each iteration must identify a set of candidate refinements and evaluate the associated candidate models. This leads naturally to the problem of how to train the candidate models, and the approach taken depends on whether they contain one or multiple submodels.

19.
Self-stabilizing distributed control is often modelled by token abstractions. A system with a single token may implement mutual exclusion; a system with multiple tokens may ensure that immediate neighbours do not simultaneously enjoy a privilege. In models of process control, tokens may represent physical objects whose movement is controlled. The problem studied in this paper is to ensure that a synchronous system with m circulating tokens maintains a distance of at least d between tokens. This problem is first considered in a ring where d is given whilst m and the ring size n are unknown. The protocol solving this problem can be uniform, with all processes running the same program, or non-uniform, with some processes acting only as token relays. The protocol for this first problem is simple, and can be expressed in a Petri net formalism. A second problem is to maximize d when m is given and n is unknown. For the second problem, this paper presents a non-uniform protocol with a single corrective process.
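The safety condition (every pair of consecutive tokens at least d apart on the ring) can be checked in a few lines; a sketch assuming token positions are known globally, whereas the protocol itself must achieve the spacing with only local knowledge:

```python
def min_token_gap(positions, n):
    """Minimum clockwise distance between consecutive tokens on a ring of size n.

    positions: distinct token positions in range(n).
    """
    ps = sorted(positions)
    gaps = [b - a for a, b in zip(ps, ps[1:])]
    gaps.append(n - ps[-1] + ps[0])     # wrap-around gap closes the ring
    return min(gaps)

# A configuration satisfies the spacing requirement iff min_token_gap(...) >= d.
```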

20.
In materials science studies, it is often desirable to know in advance the fracture toughness of a material, which is related to the energy released during its compact tension (CT) test, in order to prevent catastrophic failure. In this paper, two frameworks are proposed for automatic model elicitation from experimental data to predict the fracture energy released during the CT test of X100 pipeline steel. The two models, an adaptive rule-based fuzzy modelling approach and a double-loop neural network model, relate the load, crack mouth opening displacement (CMOD) and crack length to the energies released during this test. The relationship between how the fracture propagates and the fracture energy is further investigated in greater detail. To improve the performance of the models, a Gaussian mixture model (GMM) based error-compensation strategy, which enables one to monitor the error distributions of the predicted results, is integrated at the model validation stage. This helps isolate the error distribution pattern and establish correlations with the predictions from the deterministic models. This is the first time a data-driven approach has been used in this fashion on an application that has conventionally been handled using finite element methods or physical models.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号