Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
Several approaches within the exploratory modelling literature—each with strengths and limitations—have been introduced to address the complexity and uncertainty of decision problems. Recent model-based approaches to decision making emphasise the advantage of mixing approaches from different areas to leverage the strengths of each. This article shows how a multi-method lens on the design of decision-making approaches can better address the different characteristics of multi-objective decision problems under deep uncertainty. The article focuses on interactions between two broad areas in model-based decision making: exploratory modelling and multi-objective optimisation. It reviews this literature through a multi-method lens to analyse previous research and identify the knowledge gap, and then addresses this gap by demonstrating a multi-method approach for designing adaptive robust solutions. The suggested approach uses Pareto-optimal search from multi-objective optimisation to enumerate alternative solutions, and uses the Robust Decision Making and Dynamic Adaptive Policy Pathways approaches from exploratory modelling to analyse the robustness of the enumerated solutions in the face of many future scenarios. A hypothetical case study illustrates how the approach can be applied. The article concludes that a new multi-method design perspective on exploratory modelling is needed to provide practical guidance on how to combine exploratory modelling techniques, to shed light on existing knowledge gaps, and to open up a range of potential combinations of existing approaches that leverage the strengths of each.
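As an illustrative sketch of the robustness-analysis step, one common Robust Decision Making criterion, minimax regret, can be computed over a set of enumerated solutions and many scenarios. The solution set, scenario count and performance values below are invented for illustration and are not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

n_solutions, n_scenarios = 4, 100
# performance[i, j]: cost of solution i in scenario j (lower is better)
performance = rng.uniform(0.0, 1.0, (n_solutions, n_scenarios))

# Regret of a solution in a scenario: its cost minus the best achievable cost
best_per_scenario = performance.min(axis=0)          # shape (n_scenarios,)
regret = performance - best_per_scenario             # shape (n_solutions, n_scenarios)

# Minimax-regret criterion: pick the solution whose worst-case regret is smallest
worst_regret = regret.max(axis=1)
robust_choice = int(np.argmin(worst_regret))
print("worst-case regret per solution:", np.round(worst_regret, 3))
print("most robust solution:", robust_choice)
```

In a full multi-method workflow, the candidate solutions fed into this step would come from a Pareto-optimal search rather than being fixed in advance.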

2.
The self-organizing map (SOM) is a powerful method for exploratory analysis of process data based on the dimension-reduction approach. The SOM algorithm defines a smooth non-linear mapping from a high-dimensional input space onto a low-dimensional (typically 2D) output space that preserves the most significant information about the input data distribution. This mapping can be used to obtain 2D representations (component planes, u-matrix, etc.) of the process variables that reveal the main static relationships, making it possible to exploit available data and process-related knowledge efficiently for supervision and optimization purposes. In this work we present a complementary methodology that also represents the process dynamics in the SOM visualization, using maps in which every point represents a local dynamical behavior of the process and which, in addition, are consistent with the component planes of the process variables. The proposed methodology thus makes it possible to find relationships between the process variables and the process dynamics, opening important avenues for exploratory analysis of the dynamic behavior of non-linear and non-stationary processes. Experimental results on real data from two different industrial processes are also described, showing the possibilities of the proposed approach.
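A minimal numpy-only SOM sketch can show the mapping and how a component plane is read off the trained map. This is a toy under stated assumptions (random synthetic data, an 8x8 grid, simple linear decay schedules), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(500, 3))                 # 500 samples, 3 process variables

rows, cols, dim = 8, 8, data.shape[1]
weights = rng.normal(size=(rows, cols, dim))
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

n_iter, lr0, sigma0 = 2000, 0.5, 3.0
for t in range(n_iter):
    frac = t / n_iter
    lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
    x = data[rng.integers(len(data))]
    # Best-matching unit: the node whose weight vector is closest to the sample
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Gaussian neighbourhood pulls the BMU and its neighbours towards the sample
    h = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=-1) / (2 * sigma**2))
    weights += lr * h[..., None] * (x - weights)

# Component plane of variable 0: the value of that variable at every map node
component_plane_0 = weights[..., 0]
print(component_plane_0.shape)
```

Each component plane is an image over the map grid; comparing planes side by side is what reveals the static relationships between variables.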

3.
Using a step-by-step approach, a system-specific non-mechanistic model, based on statistical analyses of the real-world response of process parameters to bulking, was formulated for an aerobic Sequencing Batch Reactor (SBR). The approach involved two phases: "Diagnosis" and "Analysis". In the "Diagnosis" phase, model parameters were identified via statistical rules and the existing knowledge base on bulking, while in the "Analysis" phase the parameters were modelled against the Sludge Volume Index (SVI) to formulate the non-mechanistic model. Validation of the modelling analysis yielded satisfactory results. The statistical approach was executed using a multivariate data analysis package and resulted in a non-mechanistic model that is practically appealing to practitioners for its ease of use as a predictive tool for bulking. This approach, depending only on data analysis and statistical modelling, required no bench-scale experiments for preliminary identification of the bulking problem. Furthermore, being algorithmic in nature, it could form the conceptual basis for expert systems that provide very rapid and economical diagnosis of bulking problems before a more in-depth and slower analytical approach involving costly laboratory procedures is adopted.
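The "Analysis" phase idea, statistically relating process parameters to the SVI, can be sketched with ordinary least squares. The synthetic data, the particular parameters, and the linear form are illustrative assumptions, not the article's model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
X = rng.uniform(size=(n, 3))                      # e.g. F/M ratio, DO, temperature (hypothetical)
true_coef = np.array([80.0, -40.0, 15.0])
svi = 120 + X @ true_coef + rng.normal(0, 5, n)   # synthetic SVI observations (mL/g)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, svi, rcond=None)
pred = A @ coef

r2 = 1 - np.sum((svi - pred) ** 2) / np.sum((svi - svi.mean()) ** 2)
print("R^2:", round(r2, 3))
```

In practice such a fitted model would be validated against held-out plant data before being used as a predictive tool for bulking.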

4.
The paper presents a stochastic methodology for handling uncertainty in process development as part of a general framework for batch and continuous process models. The method combines systematic modelling procedures with Hammersley-sampling-based uncertainty analysis and a range of sample-based sensitivity analysis techniques, which are used to quantify predicted performance uncertainty and identify the key contributions to it. The methodology was applied to a batch chemical reactor process, and clear recommendations on how to reduce the uncertainty in the main output variables were obtained.
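Hammersley sampling is a low-discrepancy scheme built from radical inverses, which fills the input space more evenly than plain random sampling. The sketch below generates a 2-D Hammersley point set and propagates it through a toy batch-reactor-style response; the model and parameter ranges are invented for illustration:

```python
import numpy as np

def radical_inverse(i, base):
    # Van der Corput radical inverse of integer i in the given base
    f, result = 1.0, 0.0
    while i > 0:
        f /= base
        result += f * (i % base)
        i //= base
    return result

def hammersley(n, dim):
    # First coordinate is i/n; remaining coordinates use radical inverses in prime bases
    primes = [2, 3, 5, 7, 11, 13]
    pts = np.empty((n, dim))
    for i in range(n):
        pts[i, 0] = i / n
        for d in range(1, dim):
            pts[i, d] = radical_inverse(i, primes[d - 1])
    return pts

# Propagate two uncertain inputs through a toy model and summarise the output
samples = hammersley(256, 2)
k = 0.5 + samples[:, 0]            # uncertain rate constant in [0.5, 1.5]
c0 = 1.0 + samples[:, 1]           # uncertain initial concentration in [1, 2]
output = c0 * np.exp(-k * 1.0)     # first-order decay evaluated at t = 1
print("mean:", output.mean().round(3), "std:", output.std().round(3))
```

The resulting output sample can then feed sample-based sensitivity measures to rank which input uncertainties matter most.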

5.
Design of an Exploratory Simulation Analysis Framework Based on Data Farming   Cited by: 2 (1 self-citation, 1 by others)
Based on the theory of data farming, an exploratory simulation analysis framework was designed that can serve as a supporting system for exploratory simulation analysis. The framework consists of three modules: exploration-space definition, multi-scenario execution, and simulation-result analysis. Exploration-space definition sets up the parameter space to be explored; multi-scenario execution carries out the computational runs over the exploration space; and simulation-result analysis synthesises the exploration results to provide decision support for analysts.
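The three modules described above can be sketched as a simple data-farming loop: define the exploration space, execute every scenario, and summarise the results. The simulation model here is a stand-in toy formula, and the parameter names are hypothetical:

```python
import itertools
import statistics

# 1) Exploration-space definition: the parameter ranges to explore
space = {"arrival_rate": [0.2, 0.5, 0.8], "servers": [1, 2, 3]}

def simulate(arrival_rate, servers):
    # Toy utilisation figure standing in for a real simulation run
    return arrival_rate / servers

# 2) Multi-scenario execution: run every combination in the exploration space
results = []
for combo in itertools.product(*space.values()):
    params = dict(zip(space.keys(), combo))
    results.append((params, simulate(**params)))

# 3) Result analysis: summarise outcomes to support the analyst's decision
utilisations = [r for _, r in results]
print("scenarios run:", len(results))
print("mean utilisation:", round(statistics.mean(utilisations), 3))
best = min(results, key=lambda pr: pr[1])
print("lowest-utilisation configuration:", best[0])
```

A real data-farming system would distribute the middle step across many compute nodes, since the exploration space grows combinatorially with the number of parameters.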

6.
The method of fuzzy-model-based control has emerged as an alternative approach to the solution of analysis and synthesis problems associated with plants that exhibit complex non-linear behaviour. At present, the literature in this field has addressed the control design problem related to the stabilization of state-space fuzzy models. In practical situations, however, where perturbations exist in the state-space model, the problem becomes one of robust stabilization that has yet to be posed and solved. The present paper contributes in this direction through the development of a framework that exploits the distinctive property of the fuzzy model as the convex hull of linear system matrices. Using such a quasi-linear model structure, the robust stabilization of complex non-linear systems, against modelling error and parametric uncertainty, based on static state or dynamic output feedback, is reduced to a linear matrix inequality (LMI) problem.
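The convex-hull property mentioned above is what makes vertex-wise certificates work: if one quadratic Lyapunov matrix P satisfies the stability inequality at every vertex matrix A_i, it satisfies it for every convex blend of them. A minimal numerical sketch, with illustrative matrices and a hand-picked P (in practice P is found by an LMI/SDP solver):

```python
import numpy as np

# Two vertex matrices of a hypothetical T-S fuzzy model
A1 = np.array([[-2.0, 1.0], [0.0, -3.0]])
A2 = np.array([[-1.5, 0.5], [0.3, -2.0]])

P = np.eye(2)  # candidate Lyapunov matrix (an LMI solver would compute this)

def is_neg_def(M):
    # The symmetric part must have strictly negative eigenvalues
    S = (M + M.T) / 2
    return bool(np.all(np.linalg.eigvalsh(S) < 0))

# If A_i' P + P A_i < 0 holds at every vertex, it holds for every convex
# combination sum_i h_i A_i with h_i >= 0 and sum_i h_i = 1, by linearity in A.
ok = all(is_neg_def(A.T @ P + P @ A) for A in (A1, A2))
print("common quadratic Lyapunov certificate found:", ok)
```

Robust variants add perturbation bounds as extra LMI constraints on the same P.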

7.
The design of embedded systems is challenged by their growing complexity and tight performance requirements. This paper presents the COMPLEX UML/MARTE Design Space Exploration methodology, an approach based on a novel combination of Model-Driven Engineering (MDE), Electronic System Level (ESL) design, and design-exploration technologies. The proposed framework enables capturing the set of possible design solutions, that is, the design space, in an abstract, standard and graphical way by relying on UML and the standard MARTE profile. From that UML/MARTE-based model, the proposed automated generation framework produces an executable, configurable and fast performance model that includes the functional code of the application components. This generated model integrates an XML-based interface for communication with the tool that steers the exploration. In this way, the DSE loop iterations are performed efficiently, without user intervention, avoiding slow manual edits or regeneration of the performance model. The novel DSE-oriented modelling features of the methodology are shown in detail. The paper also presents the performance-model generation framework, including the enhancements with respect to the previous simulation and estimation technology, and the exploration technology. An EFR vocoder system example is used to illustrate the methodology and provide demonstrative results.

8.
Refinement is a powerful mechanism for mastering the complexities that arise when formally modelling systems. Refinement also brings with it additional proof obligations, requiring a developer to discover properties relating to their design decisions. With the goal of reducing this burden, we have investigated how a general-purpose automated theory formation tool, HR, can be used to automate the discovery of such properties within the context of the Event-B formal modelling framework. This gave rise to an integrated approach to automated invariant discovery. In addition to formal modelling and automated theory formation, our approach relies upon the simulation of system models as a key input to the invariant discovery process. Moreover, we have developed a set of heuristics which, when coupled with automated proof-failure analysis, have enabled us to tailor HR effectively to the needs of Event-B developments. Drawing in part upon case-study material from the literature, we have achieved some promising experimental results. While our focus has been on Event-B, we believe that our approach could be applied more widely to formal modelling frameworks that support simulation.
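The role simulation plays in this pipeline can be sketched cheaply: simulate a model, then keep only the candidate invariants that hold in every observed state. The toy two-counter machine and the candidate list below are invented stand-ins (HR and Event-B are not used here):

```python
# Toy Event-B-style machine: event 'inc' increments x; event 'move' shifts x -> y
states = [{"x": 0, "y": 0}]
for step in range(50):
    s = dict(states[-1])
    if step % 3 == 0 and s["x"] > 0:
        s["x"] -= 1; s["y"] += 1      # event 'move'
    else:
        s["x"] += 1                    # event 'inc'
    states.append(s)

# Candidate invariants, as a theory-formation step might propose them
candidates = {
    "x >= 0": lambda s: s["x"] >= 0,
    "y >= 0": lambda s: s["y"] >= 0,
    "x < y":  lambda s: s["x"] < s["y"],
}

# Filter: an invariant must hold in every simulated state
surviving = [name for name, pred in candidates.items()
             if all(pred(s) for s in states)]
print("invariants consistent with all traces:", surviving)
```

Trace filtering only falsifies candidates; the survivors still need to be discharged by proof, which is where the proof-failure analysis comes in.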

9.
The development process for spacecraft control systems relies heavily on modelling and simulation tools for spacecraft dynamics. For this reason, there is an increasing need for adequate design tools in order to cope efficiently with tightening budgets for space missions. The paper discusses the main issues related to the modelling and simulation of satellite dynamics for control purposes, and then presents an object-oriented modelling framework, implemented as a Modelica library. The proposed framework provides a unified treatment of a range of problems, spanning from the initial mission design and actuator sizing phases down to detailed closed-loop simulation of the control system, including realistic models of sensors and actuators. It also promotes the reuse of modelling knowledge among similar missions, thus minimizing the design effort for any new project. The proposed framework and the Modelica library are demonstrated through several illustrative case studies.

10.
The article presents an enhanced multilayered iterative algorithm-group method of data handling (MIA-GMDH)-type network, discusses a comprehensive design methodology, and carries out numerical experiments encompassing system prediction and modelling. The method is an enhancement of self-organising polynomial GMDH with several improved features: coefficient rounding and thresholding schemes and a semi-randomised selection approach to pruning. The experiments include representative time-series prediction (gas furnace process data) and process modelling (predicting milligrams of vitamin B2 per gram of turnip greens, and drilling cutting-force modelling). The results show the promising potential of the self-organising network methodology in both prediction and modelling applications.
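The core GMDH idea, self-organising selection among simple polynomial units, can be sketched as a single layer: fit a quadratic model for every pair of inputs and rank the pairs by validation error. This plain-GMDH sketch omits the article's rounding/thresholding and semi-randomised pruning enhancements, and the data are synthetic:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (200, 4))
y = 1 + 2 * X[:, 0] * X[:, 1] - X[:, 2] ** 2 + rng.normal(0, 0.05, 200)

train, val = slice(0, 120), slice(120, 200)

def pair_features(a, b):
    # Full quadratic polynomial in two inputs (the classic GMDH partial model)
    return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])

candidates = []
for i, j in combinations(range(X.shape[1]), 2):
    F = pair_features(X[:, i], X[:, j])
    coef, *_ = np.linalg.lstsq(F[train], y[train], rcond=None)
    err = np.mean((F[val] @ coef - y[val]) ** 2)   # external (validation) criterion
    candidates.append((err, (i, j)))

candidates.sort()
print("best input pairs by validation MSE:", [p for _, p in candidates[:2]])
```

A multilayer network stacks such layers, feeding the surviving units' outputs into the next layer's pairwise models.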

11.
Energy-efficient building design requires building performance simulation (BPS) to compare multiple design options for their energy performance. At the early stage, however, BPS is often ignored due to uncertainty, lack of detail, and computational time. This article studies probabilistic and deterministic approaches to treating uncertainty; detailed and simplified zoning for creating zones; and dynamic simulation and machine learning for making energy predictions. A state-of-the-art approach such as dynamic simulation provides a reliable estimate of energy demand but is computationally expensive. Reducing computational time requires an alternative approach, such as a machine learning (ML) model. However, an alternative approach introduces a prediction gap, and its effect on comparing options needs to be investigated. A plugin for a Building Information Modelling (BIM) tool has been developed to perform BPS using the various approaches, which have been tested on an office building with five design options. A method using the probabilistic approach to treat uncertainty, detailed zoning to create zones, and EnergyPlus to predict energy is treated as the reference method. The deterministic and ML approaches have a small prediction gap, and their comparison results are similar to the reference method. The simplified-model approach has a large prediction gap, and only 40% of its comparison results agree with the reference method. These findings are useful for developing a BIM-integrated tool to compare options at the early design stage and for ascertaining which approach should be adopted in a time-constrained situation.
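Whether a cheaper predictor preserves design-option comparisons can be checked directly: rank the options with a "reference" estimate and with a gapped surrogate, then count agreeing pairwise comparisons. The five energy values and the gap magnitude below are illustrative, not the article's results:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
reference = np.array([100.0, 95.0, 110.0, 90.0, 105.0])   # hypothetical kWh/m2 per option
surrogate = reference + rng.normal(0, 2.0, 5)             # surrogate with a small prediction gap

pairs = list(combinations(range(5), 2))
# A comparison "agrees" if both predictors order the pair of options the same way
agree = sum((reference[i] < reference[j]) == (surrogate[i] < surrogate[j])
            for i, j in pairs)
print(f"agreeing comparisons: {agree}/{len(pairs)}")
```

With a gap comparable to the spacing between options (as with the simplified-zoning model), this agreement fraction drops sharply.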

12.
In this paper, we study a novel approach to spoken language recognition using an ensemble of binary classifiers. In this framework, we begin by representing a speech utterance with a high-dimensional feature vector, such as the phonotactic characteristics or the polynomial expansion of cepstral features, on which a binary classifier can be built. We adopt a distributed output coding strategy in the ensemble classifier design: we decompose a multiclass language recognition problem into many binary classification tasks, each of which addresses a language recognition subtask using a component classifier. We then combine the results of the component classifiers to form an output code as a hypothesized solution to the overall language recognition problem. In this way, we effectively project high-dimensional feature vectors into a tractable low-dimensional space while maintaining the language-discriminative characteristics of the spoken utterances. By fusing the output codes from both phonotactic and cepstral features, we achieve equal-error rates of 1.38% and 3.20% for 30-s trials on the 2003 and 2005 NIST language recognition evaluation databases.
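The distributed output coding idea can be sketched with a code matrix: one binary task per column, one simple classifier per task, and nearest-code-word decoding. The sketch below uses synthetic Gaussian "utterance" features, a one-vs-rest code for brevity, and least-squares linear classifiers; the real system uses phonotactic/cepstral features and stronger component classifiers:

```python
import numpy as np

rng = np.random.default_rng(4)
n_classes, dim = 4, 20
centers = rng.normal(0, 3, (n_classes, dim))
labels = np.repeat(np.arange(n_classes), 50)
X = centers[labels] + rng.normal(0, 1.0, (200, dim))

# Code matrix: rows are class code words, columns are binary tasks
code = 2 * np.eye(n_classes) - 1

# Train one least-squares linear classifier per binary task
Xb = np.column_stack([np.ones(len(X)), X])
W = np.column_stack([
    np.linalg.lstsq(Xb, code[labels, t], rcond=None)[0]
    for t in range(code.shape[1])
])

# Decode: choose the class whose code word is closest to the output vector
outputs = Xb @ W                                   # (n_samples, n_tasks)
pred = np.argmin(((outputs[:, None, :] - code[None, :, :]) ** 2).sum(-1), axis=1)
print("training accuracy:", (pred == labels).mean())
```

Error-correcting code matrices with more columns than classes make the decoding step robust to individual component-classifier mistakes.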

13.
Deployment is a fundamental issue in Wireless Sensor Networks (WSNs): the number and locations of sensors determine the topology of the WSN, which in turn influences its performance. Usually, sensor locations are precomputed based on a "perfect" sensor coverage model. In reality, however, there is inherent uncertainty and imprecision associated with sensor readings. This issue impinges upon the success of any WSN deployment, and it is therefore important to consider it at the design stage. In contrast to existing work, this paper investigates belief functions theory to design a unified approach for robust, uncertainty-aware WSN deployment. Specifically, we address the issue of handling uncertainty and information fusion for efficient WSN deployment by exploiting the belief-functions reasoning framework. We present a flexible framework for target/event detection within the transferable belief model. Using this framework, we propose uncertainty-aware deployment algorithms that determine the minimum number of sensors, as well as their locations, such that full area coverage is provided. Related issues, such as connectivity, preferential coverage, challenging environments and sensor reliability, are also discussed. Simulation results, based on both a synthetic data set and data traces collected in a real deployment for vehicle detection, demonstrate the ability of our approach to achieve efficient WSN deployment by exploiting a collaborative target/event detection scheme. Using the devised approach, we successfully deployed an experimental testbed for motion detection. The obtained results are reported and compared with other works.
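A greedy placement loop illustrates the minimum-sensors-for-full-coverage objective. For brevity this sketch uses a crisp disk coverage model on a small grid (the paper's contribution is precisely to replace such "perfect" models with an evidential, belief-functions detection model); grid size and sensing radius are illustrative:

```python
import numpy as np

size, radius = 10, 3.0
ys, xs = np.mgrid[0:size, 0:size]
points = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

# Pairwise distances between candidate sensor sites (here, all grid points)
d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

uncovered = np.ones(len(points), dtype=bool)
sensors = []
while uncovered.any():
    # Greedy step: pick the site whose disk covers the most uncovered points
    gain = (d[:, uncovered] <= radius).sum(axis=1)
    best = int(np.argmax(gain))
    sensors.append(tuple(points[best]))
    uncovered &= d[best] > radius

print("sensors needed for full coverage:", len(sensors))
```

Replacing the crisp disk test with a graded detection belief changes only the coverage predicate, not the greedy structure.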

14.
The present study proposes a General Probabilistic Framework (GPF) for uncertainty and global sensitivity analysis of deterministic models in which, in addition to scalar inputs, non-scalar and correlated inputs can be considered as well. The analysis is conducted with the variance-based approach of Sobol/Saltelli, in which first-order and total sensitivity indices are estimated. The results of the framework can be used in a loop for model improvement, parameter estimation or model simplification. The framework is applied to SWAP, a 1D hydrological model for the transport of water, solutes and heat in unsaturated and saturated soils. The sources of uncertainty are grouped into five main classes: model structure (soil discretization), input (weather data), time-varying (crop) parameters, scalar parameters (soil properties) and observations (measured soil moisture). For each source of uncertainty, different realizations are created based on direct monitoring activities. Uncertainty in evapotranspiration, soil moisture in the root zone and bottom fluxes below the root zone is considered in the analysis. The results show that the dominant sources of uncertainty differ for each output considered, and that it is necessary to consider multiple output variables for a proper assessment of the model. The performance of the model can be improved by reducing the uncertainty in the observations, the soil parameters and the weather data. Overall, the study shows the capability of the GPF to quantify the relative contributions of the different sources of uncertainty and to identify the priorities for improving the performance of the model. The proposed framework can be extended to a wide variety of modelling applications, including cases where direct measurements of the model output are not available.
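The Sobol/Saltelli estimators for first-order and total indices can be sketched compactly. This demonstration uses the standard Ishigami test function rather than the SWAP model, with common pick-freeze estimators:

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 4096, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))

def ishigami(X, a=7.0, b=0.1):
    # Standard global-sensitivity benchmark with known analytic indices
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
            + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

S, ST = np.empty(d), np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                            # A with column i taken from B
    fABi = ishigami(ABi)
    S[i] = np.mean(fB * (fABi - fA)) / var         # first-order index
    ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # total index
print("first-order:", S.round(2), "total:", ST.round(2))
```

For Ishigami, x3 has a near-zero first-order index but a clearly positive total index, because it acts only through its interaction with x1; this is exactly the distinction the two index types are designed to expose.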

15.
Granular computing is a computational paradigm that mimics human cognition by grouping similar information together. Compatibility operators such as cardinality, orientation, density, and multidimensional length act on both raw data and the information granules formed from raw data, providing a framework for human-like information processing in which information granulation is intrinsic. Granular computing as a computational concept is not new; however, only relatively recently has it been formalised computationally via Computational Intelligence methods such as Fuzzy Logic and Rough Sets. Neutrosophy is a unifying field in logics that extends the concept of fuzzy sets into a three-valued logic with an indeterminacy value, and it is the basis of neutrosophic logic, neutrosophic probability, neutrosophic statistics and interval-valued neutrosophic theory. In this paper we present a new framework for creating Granular Computing Neural-Fuzzy modelling structures via Neutrosophic Logic, to address the issue of uncertainty during the data granulation process. The theoretical and computational aspects of the approach are presented and discussed, along with a case study using real industrial data: the predictive modelling of the Charpy toughness of heat-treated steel, a process that exhibits very high measurement uncertainty due to the thermomechanical complexity of the Charpy test itself. The results show that the proposed approach leads to more meaningful and simpler granular models, with better generalisation performance compared to other recent modelling attempts on the same data set.

16.
A rigorous validation for the use of a set of linear time-invariant models as a surrogate in the design of controllers for uncertain nonlinear systems that are invertible as one-to-one operators, such as used in the nonlinear quantitative feedback theory (NLQFT) design methodology, has been given by Baños and Bailey. This paper presents a similar validation but weakens the requirement on the invertibility of the nonlinear plant by application of Kakutani's fixed-point theorem and an incremental gain constraint on the plant within its operational envelope. The set of linear time-invariant models to be used for design is shown to be an extension (termed here the linear time-invariant extension, LTIE) of the nonlinear plant restricted to the desired output operating space. A new non-parametric approach to the modelling of the LTIE is proposed, based on Fourier transforms of the plant I/O data, which accordingly may rely solely on experimental testing without the need for an explicit parametric plant model. This new approach thus extends the application of robust linear controller design methods (including those of NLQFT) to nonlinear plants with set-valued (multi-valued) inverses, such as those containing backlash, and also to plants for which explicit parametric models are difficult to obtain. The method is illustrated by applying the non-parametric approach to an NLQFT tracking controller design for a mechanical backlashed servomechanism problem. Copyright © 2013 John Wiley & Sons, Ltd.
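The non-parametric idea, estimating a frequency response directly from Fourier transforms of I/O records, can be sketched as an empirical transfer-function estimate. The "plant" here is a known discrete first-order filter so the estimate can be checked against its exact response; a real application would use bench-test data from the nonlinear plant:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 4096
u = rng.normal(size=N)                    # broadband excitation signal
a = 0.9
y = np.zeros(N)                           # plant: y[k] = a*y[k-1] + (1-a)*u[k]
for k in range(1, N):
    y[k] = a * y[k - 1] + (1 - a) * u[k]

U, Y = np.fft.rfft(u), np.fft.rfft(y)
G_est = Y / U                             # empirical transfer-function estimate

w = 2 * np.pi * np.arange(len(U)) / N
G_true = (1 - a) / (1 - a * np.exp(-1j * w))   # exact response of the filter
err = np.max(np.abs(G_est[1:100] - G_true[1:100]))
print("max estimation error at low frequencies:", err)
```

Averaging estimates over several independent records, or windowing, reduces the leakage error visible in a single-record estimate.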

17.
This article describes a standardised way to build context-aware global smart space applications using information that is distributed across independent (legacy, sensor-enabled, and embedded) systems by exploiting the overlapping spatial and temporal attributes of the information maintained by these systems. The framework supports a spatial programming model based on a topographical approach to modelling space that enables systems to independently define and use potentially overlapping spatial context in a consistent manner and in contrast to topological approaches, in which geographical relationships between objects are described explicitly. This approach is supported by an extensible data model that implicitly captures the relationships between information provided by separate underlying systems and facilitates the incremental construction of global smart spaces since the underlying systems to be incorporated are largely decoupled. The framework has been evaluated using a prototype that integrates legacy systems and context-aware services for multi-modal urban journey planning and for visualising traffic congestion.

18.
19.
While Model-Based Systems Engineering (MBSE) alleviates the ambiguity problem of the conventional document-based approach, it brings management complexity. Faced with this complexity, one of the core issues that companies care about is how to effectively evaluate, predict, and manage it in the early system design stage. The inaccuracy of contemporary complexity measurement approaches still exists due to the inconsistency between the actual design process in physical space and the theoretical simulation in virtual space. Digital Twin (DT) technology provides a promising way to alleviate this problem by bridging physical and virtual space. Aiming to integrate DT with MBSE for system design complexity analysis and prediction, and building on previous work, an integration framework named System Design Digital Twin in 5 Dimensions is introduced from a knowledge perspective. The framework provides services for design complexity measurement, effort estimation, and change propagation prediction. To represent the system design digital twin in a unified way, a modeling profile is constructed through SysML stereotypes; it includes a system-design digital model in virtual space profile, a system services profile, a relationships profile, and a digital twin data profile. Finally, the system design of a cube-satellite space mission demonstrates the proposed unified modeling approach.

20.
The design and implementation of effective environmental policies need to be informed by a holistic understanding of the system processes (biophysical, social and economic), their complex interactions, and how they respond to various changes. Models, integrating different system processes into a unified framework, are seen as useful tools to help analyse alternatives with stakeholders, assess their outcomes, and communicate results in a transparent way. This paper reviews five common approaches or model types that have the capacity to integrate knowledge by developing models that can accommodate multiple issues, values, scales and uncertainty considerations, as well as facilitate stakeholder engagement. The approaches considered are: system dynamics, Bayesian networks, coupled component models, agent-based models and knowledge-based models (also referred to as expert systems). We start by discussing several considerations in model development, such as the purpose of model building, the availability of qualitative versus quantitative data for model specification, the level of spatio-temporal detail required, and the treatment of uncertainty. These considerations and a review of applications are then used to develop a framework that aims to assist modellers and model users in choosing an appropriate modelling approach for their integrated assessment applications and that enables more effective learning in interdisciplinary settings.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号