Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
2.
Many of the problems addressed through engineering analysis include a set of regulatory (or other) probabilistic requirements that must be demonstrated with some degree of confidence through the analysis. Problems cast in this environment can pose new challenges for computational analyses in both model validation and model-based prediction. The “regulatory problems” given for the “Sandia challenge problems exercise”, while relatively simple, provide an opportunity to demonstrate methods that address these challenges. This paper describes and illustrates methods that can be useful in analysis of the regulatory problem. Specifically, we discuss:
(1) an approach for quantifying variability and uncertainty separately to assess the regulatory requirements and provide a statement of confidence; and
(2) a general validation metric to focus the validation process on a specific range of the predictive distribution (the predictions near the regulatory threshold).
These methods are illustrated using the challenge problems. Solutions are provided for both the static frame and structural dynamics problems.
Keywords: Regulatory problem; Calibration; Model validation; Model-based prediction
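
As a rough illustration of treating variability and uncertainty separately, the following minimal Python sketch (not the authors' implementation; the response model, threshold, and sample sizes are invented) uses a double-loop Monte Carlo: the outer loop samples epistemic parameters, the inner loop samples aleatory variability, and the spread of the resulting exceedance probabilities supports a confidence statement about a regulatory requirement.

# Hypothetical double-loop (second-order) Monte Carlo sketch: the outer loop
# samples epistemic (lack-of-knowledge) parameters, the inner loop samples
# aleatory variability, and each outer sample yields one estimate of the
# probability of exceeding a regulatory threshold.  The spread of those
# estimates supports a confidence statement about the requirement.
import numpy as np

rng = np.random.default_rng(0)

THRESHOLD = 3.0          # hypothetical regulatory limit on the response
N_EPISTEMIC = 200        # outer-loop samples (uncertainty)
N_ALEATORY = 2000        # inner-loop samples (variability)

def response(mean_load, load):
    """Stand-in predictive model: response grows with the applied load."""
    return 0.8 * load + 0.1 * mean_load

exceedance = []
for _ in range(N_EPISTEMIC):
    # Epistemic draw: the true mean load is only known within an interval.
    mean_load = rng.uniform(2.0, 2.6)
    # Aleatory draws: unit-to-unit variability of the load about that mean.
    loads = rng.normal(mean_load, 0.3, size=N_ALEATORY)
    p_fail = np.mean(response(mean_load, loads) > THRESHOLD)
    exceedance.append(p_fail)

exceedance = np.array(exceedance)
print(f"median exceedance probability: {np.median(exceedance):.4f}")
print(f"95% confidence bound:          {np.quantile(exceedance, 0.95):.4f}")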

3.
Using Bayesian Networks to Manage Uncertainty in Student Modeling
When a tutoring system aims to provide students with interactive help, it needs to know what knowledge the student has and what goals the student is currently trying to achieve. That is, it must do both assessment and plan recognition. These modeling tasks involve a high level of uncertainty when students are allowed to follow various lines of reasoning and are not required to show all their reasoning explicitly. We use Bayesian networks as a comprehensive, sound formalism to handle this uncertainty. Using Bayesian networks, we have devised probabilistic student models for Andes, a tutoring system for Newtonian physics whose philosophy is to maximize student initiative and freedom during the pedagogical interaction. Andes' models provide long-term knowledge assessment, plan recognition, and prediction of students' actions during problem solving, as well as assessment of students' knowledge and understanding as students read and explain worked-out examples. In this paper, we describe the basic mechanisms that allow Andes' student models to soundly perform assessment and plan recognition, as well as the Bayesian network solutions to issues that arose in scaling up the model to a full-scale, field-evaluated application. We also summarize the results of several evaluations of Andes, which provide evidence on the accuracy of its student models.
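
For readers unfamiliar with the underlying mechanics, the following minimal Python sketch shows the kind of Bayesian update such a student model performs for a single hidden skill node; the prior, slip, and guess probabilities are invented and do not correspond to Andes' actual networks.

# Minimal, hypothetical update in the spirit of a Bayesian-network student
# model: a hidden "knows rule" node with a prior, and an observed "correct
# action" node with slip/guess probabilities.  Numbers are illustrative only.
P_KNOWS = 0.4    # prior probability the student has mastered the rule
P_SLIP = 0.1     # P(incorrect action | knows rule)
P_GUESS = 0.2    # P(correct action | does not know rule)

def posterior_knows(prior, correct):
    """Bayes' rule update of mastery given one observed action."""
    p_obs_given_knows = (1 - P_SLIP) if correct else P_SLIP
    p_obs_given_not = P_GUESS if correct else (1 - P_GUESS)
    numerator = p_obs_given_knows * prior
    return numerator / (numerator + p_obs_given_not * (1 - prior))

belief = P_KNOWS
for action_correct in [True, True, False, True]:
    belief = posterior_knows(belief, action_correct)
    print(f"observed {'correct' if action_correct else 'incorrect'} "
          f"-> P(knows rule) = {belief:.3f}")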

4.
5.
A probabilistic construction of model validation
We describe a procedure to assess the predictive accuracy of process models subject to approximation error and uncertainty. The proposed approach is a functional-analysis-based probabilistic method in which random quantities are represented using polynomial chaos expansions (PCEs). The approach permits the uncertainty assessment in validation, a significant component of the process, to be formulated as a problem of approximation theory. It has two essential parts. First, a statistical procedure is implemented to calibrate uncertain parameters of the candidate model from experimental or model-based measurements. This calibration technique employs PCEs to represent the inherent uncertainty of the model parameters. Based on the asymptotic behavior of the statistical parameter estimator, the associated PCE coefficients are then characterized as independent random quantities to represent epistemic uncertainty due to lack of information. Second, a simple hypothesis test is implemented to explore the validation of the computational model assumed for the physics of the problem. This validation path is implemented for the dynamical-system validation challenge exercise.
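
The following short Python sketch illustrates a non-intrusive PCE fit of an uncertain response in a probabilists' Hermite basis; the response function g and the expansion degree are assumptions chosen for illustration, not the paper's model.

# Hedged sketch of a non-intrusive polynomial chaos expansion (PCE): an
# uncertain model output g(xi), with xi a standard-normal germ, is expanded
# in probabilists' Hermite polynomials and the coefficients are fitted by
# least squares on random samples.  g() is a stand-in, not the paper's model.
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from math import factorial

rng = np.random.default_rng(1)
DEGREE = 4

def g(xi):
    """Stand-in uncertain response as a function of the standard-normal germ."""
    return np.exp(0.3 * xi) + 0.1 * xi**2

xi = rng.standard_normal(5000)
V = hermevander(xi, DEGREE)                # design matrix of He_0..He_4(xi)
coef, *_ = np.linalg.lstsq(V, g(xi), rcond=None)

# Orthogonality of He_k w.r.t. the standard normal gives simple moments.
mean = coef[0]
var = sum(coef[k] ** 2 * factorial(k) for k in range(1, DEGREE + 1))
print(f"PCE mean ~ {mean:.4f}, PCE variance ~ {var:.4f}")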

6.
This paper describes a "top-down" uncertainty quantification (UQ) approach for calibration, validation, and predictive accuracy assessment of the SNL Validation Workshop Structural Dynamics Challenge Problem. The top-down UQ approach differs from the more conventional ("bottom-up") approach in that correlated statistical analysis is performed directly on the modal characteristics (frequencies, mode shapes, and damping ratios) rather than using the modal characteristics to derive the statistics of physical model parameters (springs, masses, and viscous damping elements in the present application). In this application, a stochastic subsystem model is coupled with a deterministic subsystem model to analyze stochastic system response to stochastic forcing functions. The weak nonlinearity of the stochastic subsystem was characterized by testing it at three input levels: low, medium, and high. The calibrated subsystem models were validated against additional test data using published NASA and Air Force validation criteria. The validated subsystem models were first installed in the accreditation test bed, where system response simulations involving stochastic shock-type force inputs were conducted. The validated stochastic subsystem model was then installed in the target application, and simulations involving limited-duration segments of stationary random vibration excitation were conducted.

7.
8.
A crucial step in modeling a system is determining the parameter values to use in the model. In this paper we assume that we have a set of measurements collected from an operational system, and that an appropriate model of the system (e.g., based on queueing theory) has been developed. Not infrequently, proper values for certain parameters of this model are difficult to estimate from available data (because the corresponding parameters have unclear physical meaning, because they cannot be obtained directly from available measurements, etc.). Hence, we need a technique to determine the missing parameter values, i.e., to calibrate the model. As an alternative to an unscalable "brute force" technique, we propose to view model calibration as a non-linear optimization problem with constraints. The resulting method is conceptually simple and easy to implement. Our contribution is twofold. First, we propose improved definitions of the "objective function" that quantifies the "distance" between performance indices produced by the model and the values obtained from measurements. Second, we develop a customized derivative-free optimization (DFO) technique whose original feature is the ability to allow temporary constraint violations. This technique allows us to solve the optimization problem accurately, thereby providing the "right" parameter values. We illustrate our method using two simple real-life case studies.
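
The following Python sketch illustrates the calibration-as-optimization idea on a toy M/M/1 queueing model, using an off-the-shelf derivative-free optimizer (Nelder-Mead) instead of the paper's customized DFO method; the measurements and the objective are invented.

# Hedged sketch of model calibration as an optimization problem: an unknown
# parameter of a simple M/M/1 queueing model (the service rate) is chosen so
# that the model's response time matches measured values, using a generic
# derivative-free optimizer rather than the paper's customized DFO method.
import numpy as np
from scipy.optimize import minimize

arrival_rates = np.array([2.0, 4.0, 6.0])        # measured workloads (req/s)
measured_resp = np.array([0.17, 0.25, 0.52])     # measured response times (s)

def model_response_time(service_rate, lam):
    """M/M/1 mean response time 1/(mu - lambda), valid for mu > lambda."""
    return 1.0 / (service_rate - lam)

def objective(x):
    mu = x[0]
    if mu <= arrival_rates.max():                # keep the queue stable
        return 1e6
    pred = model_response_time(mu, arrival_rates)
    rel_err = (pred - measured_resp) / measured_resp
    return float(np.sum(rel_err ** 2))           # "distance" model vs. data

result = minimize(objective, x0=[10.0], method="Nelder-Mead")
print(f"calibrated service rate mu ~ {result.x[0]:.3f} req/s")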

9.
This paper investigates how to optimize a facility location strategy so as to maximize the intercepted customer flow, while accounting for "flow-by" customers' path-choice behaviors and their travel cost limitations. A bi-level programming static model is constructed for this problem, and a heuristic based on greedy search is designed to solve it. We then propose a chance-constrained bi-level model with stochastic flows and a fuzzy trip-cost threshold. To solve this uncertain model more efficiently, we integrate the simplex method, a genetic algorithm, stochastic simulation, and fuzzy simulation into a hybrid intelligent algorithm. Randomly generated examples illustrate the performance and effectiveness of the proposed algorithms.
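
As a rough illustration of the greedy component, the following Python sketch adds facilities one at a time where they capture the most not-yet-intercepted flow; the flows, paths, and candidate sites are invented, and the bi-level and chance-constrained extensions are not reproduced.

# Hedged sketch of the greedy flow-interception idea: each customer flow
# follows a path of candidate nodes; a flow is intercepted if at least one
# facility lies on its path.  All data below are made up for illustration.
flows = [
    {"volume": 120, "path": {"A", "B", "C"}},
    {"volume": 80,  "path": {"B", "D"}},
    {"volume": 60,  "path": {"C", "E"}},
    {"volume": 40,  "path": {"D", "E", "F"}},
]
candidates = {"A", "B", "C", "D", "E", "F"}
P = 2  # number of facilities to locate

chosen, intercepted = set(), [False] * len(flows)
for _ in range(P):
    best_node, best_gain = None, -1
    for node in candidates - chosen:
        gain = sum(f["volume"] for f, done in zip(flows, intercepted)
                   if not done and node in f["path"])
        if gain > best_gain:
            best_node, best_gain = node, gain
    chosen.add(best_node)
    intercepted = [done or (best_node in f["path"])
                   for f, done in zip(flows, intercepted)]

total = sum(f["volume"] for f, done in zip(flows, intercepted) if done)
print(f"facilities: {sorted(chosen)}, intercepted flow: {total}")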

10.
A statistical procedure for calibration and validation is presented as an industrial application to the analysis of piston insertion into the housing of a pyrotechnically actuated device. Three parameters that strongly affect the solution are identified in the model, but they are not known a priori. A Bayesian approach is employed to calibrate these parameters in the form of distributions, which account for the uncertainty of the model and the test data. To validate the model, similar new problems are introduced, analyzed, and tested. The predictions for the new problems are found to work as well as those for the calibration problem, which suggests that the model is useful for subsequent new designs without additional testing.
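
A minimal sketch of this kind of Bayesian calibration is shown below using a basic Metropolis sampler; the insertion-force model, prior range, data, and noise level are assumptions chosen for illustration and do not come from the paper.

# Hedged sketch of Bayesian calibration: a single uncertain model parameter
# (a friction-like coefficient in a made-up insertion-force model) is given a
# prior and updated with noisy test data, yielding a posterior distribution
# rather than a point estimate.
import numpy as np

rng = np.random.default_rng(2)

def model(theta, x):
    """Stand-in physics model: predicted force vs. insertion depth x."""
    return theta * x ** 2

x_data = np.array([0.5, 1.0, 1.5, 2.0])
y_data = np.array([0.7, 2.4, 5.1, 9.3])          # hypothetical measurements
sigma = 0.3                                      # assumed measurement noise

def log_post(theta):
    if theta <= 0.0 or theta > 10.0:             # uniform prior on (0, 10]
        return -np.inf
    resid = y_data - model(theta, x_data)
    return -0.5 * np.sum((resid / sigma) ** 2)

samples, theta = [], 1.0
for _ in range(20000):
    prop = theta + rng.normal(0, 0.1)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

post = np.array(samples[5000:])                  # drop burn-in
print(f"posterior mean {post.mean():.3f}, 95% CI "
      f"[{np.quantile(post, 0.025):.3f}, {np.quantile(post, 0.975):.3f}]")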

11.
Knowledge-base V&V primarily addresses the question: “Does my knowledge-base contain the right answer and can I arrive at it?” One of the main goals of our work is to properly encapsulate the knowledge representation and allow the expert to work with manageable-sized chunks of the knowledge-base. This work develops a new methodology for the verification and validation of Bayesian knowledge-bases that assists in constructing and testing such knowledge-bases. Assistance takes the form of ensuring that the knowledge is syntactically correct, correcting “imperfect” knowledge, and also identifying when the current knowledge-base is insufficient as well as suggesting ways to resolve this insufficiency. The basis of our approach is the use of probabilistic network models of knowledge. This provides a framework for formally defining and working on the problems of uncertainty in the knowledge-base.

In this paper, we examine a project concerned with assisting a human expert in building knowledge-based systems under uncertainty, and we focus on how verification and validation are currently achieved in it.


12.
This paper describes the Generic Automated Marking Environment (GAME) and provides a detailed analysis of its performance in assessing student programming projects and exercises. GAME has been designed to automatically assess programming assignments written in a variety of languages, based on the "structure" of the source code and the correctness of the program's output. Currently, the system is able to mark programs written in Java, C++, and C. To use the system, instructors provide a simple "marking schema" for each assessment item, which includes pertinent information such as the location of files and the model solution. In this research, GAME has been tested on a number of student programming exercises and assignments, and its performance has been compared against that of a human marker. An in-depth statistical analysis of the comparison is presented, providing encouraging results and directions for employing GAME as a tool for teaching and learning.
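
The following Python sketch illustrates, in a language-agnostic way, what a simple marking schema might drive: run the submission, compare its output with the expected output, and apply a crude structure check; the schema fields, weights, and the Python stand-in for a compiled student program are hypothetical, not GAME's actual format.

# Hedged sketch of automated marking: execute the student program, compare
# its output with the model solution's expected output, and score required
# constructs found in the source.  A Python one-liner stands in for the
# compiled submission; the schema layout is invented for illustration.
import re
import subprocess
import sys

schema = {
    "run_cmd": [sys.executable, "-c", "print(sum(range(1, 11)))"],  # student program
    "expected_output": "55",
    "required_constructs": [r"for|while", r"sum|\+"],               # "structure" check
    "weights": {"output": 0.7, "structure": 0.3},
}

result = subprocess.run(schema["run_cmd"], capture_output=True, text=True, timeout=10)
output_ok = result.stdout.strip() == schema["expected_output"]

source = schema["run_cmd"][-1]
hits = sum(bool(re.search(p, source)) for p in schema["required_constructs"])
structure_score = hits / len(schema["required_constructs"])

mark = (schema["weights"]["output"] * (1.0 if output_ok else 0.0)
        + schema["weights"]["structure"] * structure_score)
print(f"output correct: {output_ok}, structure score: {structure_score:.2f}, "
      f"mark: {mark:.2f}")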

13.
Constructing an accurate effort prediction model is a challenge in software engineering. This paper presents three Bayesian statistical software effort prediction models for database-oriented software systems developed using a specific 4GL tool suite. The models consist of specification-based software size metrics and a development team productivity metric. The models are constructed from the subjective knowledge of a human expert and calibrated using empirical data collected from 17 software systems developed in the target environment. The models' predictive accuracy is evaluated using subsets of the same data that were not used for calibration. The results show that the models achieve very good predictive accuracy in terms of the MMRE and pred measures, confirming that Bayesian statistical models can predict effort successfully in the target environment. Compared with commonly used multiple linear regression models, the Bayesian statistical models' predictive accuracy is equivalent in general. However, when the number of software systems used for calibration falls below five, the predictive accuracy of the best Bayesian statistical models is significantly better than that of the multiple linear regression model. This result suggests that Bayesian statistical models are a better choice when software organizations or practitioners do not possess sufficient empirical data for calibration. The authors expect these findings to encourage more researchers to investigate the use of Bayesian statistical models for predicting software effort.
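
For reference, the two accuracy measures mentioned above can be computed as in the short Python sketch below; the effort values are invented for illustration.

# Hedged sketch of the two measures the abstract relies on: MMRE (mean
# magnitude of relative error) and pred(0.25), the fraction of projects whose
# estimate falls within 25% of the actual effort.  Data are made up.
import numpy as np

actual = np.array([120.0, 340.0, 95.0, 410.0, 180.0])     # person-hours
predicted = np.array([140.0, 310.0, 90.0, 480.0, 175.0])

mre = np.abs(actual - predicted) / actual
mmre = mre.mean()
pred_25 = np.mean(mre <= 0.25)

print(f"MMRE = {mmre:.3f}, pred(0.25) = {pred_25:.2f}")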

14.
Push technology automates the information delivery process by not requiring users to request the information they need. Wireless has experienced explosive growth in recent years, and "push" will be the predominant wireless service delivery paradigm of the future. A wide variety of services, alerts, and messages, such as promotional content, will be delivered to consumers' phones or PDAs. However, pushing information to a wireless device can be a challenge because of intermittent communication links and resource constraints on wireless devices, as well as limited bandwidth. This paper explores an efficient multicasting mechanism that "pushes" pre-specified information to groups of wireless devices. The mechanism operates with limited bandwidth and also overcomes the connectivity problem. Based on this concept, we have designed and implemented a system to multicast sales information via wireless technology. The system is message-oriented and JMS compliant.

15.
Thermal problem solution using a surrogate model clustering technique
The thermal problem defined for the validation challenge workshop involves a simple one-dimensional slab geometry with a defined heat flux at the front face, adiabatic conditions at the rear face, and a provided baseline predictive simulation model to be used to simulate the time-dependent heat-up of the slab. This paper discusses a clustering methodology, using a surrogate heat transfer algorithm, that allows propagation of the uncertainties in the model parameters with a very limited series of full simulations. This clustering methodology can be used when the predictive model is very expensive to run and only a few simulation runs are possible. A series of time-dependent statistical comparisons designed to validate the model against the experimental data provided in the problem formulation is also presented, and limitations of the approach are discussed. The purpose of this paper is to present methods for propagation of uncertainty with limited computer runs, validation with uncertain data, and decision-making under uncertainty. The final results of the analysis indicate that there is approximately 95% confidence that the regulatory criteria under consideration would not be met, given the high level of physical data provided.
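
The following Python sketch illustrates the general clustering idea with stand-in cheap and expensive models (not the workshop's heat-transfer codes): many surrogate evaluations are clustered, and the expensive model is run only once per cluster representative.

# Hedged sketch of clustering for expensive models: many parameter samples
# are screened with a cheap surrogate, grouped into a handful of clusters,
# the expensive model is run only at each cluster representative, and the
# cluster sizes weight the resulting predictions.  Both models are stand-ins.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(3)

def cheap_surrogate(k, rho_c):
    """Fast approximation of the peak temperature (illustrative formula)."""
    return 300.0 + 80.0 / k + 20.0 / rho_c

def expensive_model(k, rho_c):
    """Stand-in for the full simulation (only a few calls are affordable)."""
    return 300.0 + 82.0 / k + 19.0 / rho_c + 2.0 * np.sin(k * rho_c)

# Propagate input uncertainty with many cheap runs ...
samples = np.column_stack([rng.normal(0.5, 0.05, 1000),    # conductivity k
                           rng.normal(4.0, 0.4, 1000)])    # heat capacity
screen = cheap_surrogate(samples[:, 0], samples[:, 1])

# ... cluster on (inputs, surrogate output), then run the full model sparsely.
features = np.column_stack([samples, screen])
_, labels = kmeans2(features, k=5, seed=3, minit="++")

weights, predictions = [], []
for c in range(5):
    members = samples[labels == c]
    if len(members) == 0:                       # guard against an empty cluster
        continue
    rep = members.mean(axis=0)                  # cluster representative
    weights.append(len(members) / len(samples))
    predictions.append(expensive_model(*rep))

mean_temp = float(np.dot(weights, predictions))
print(f"weighted mean peak temperature ~ {mean_temp:.1f} K from {len(weights)} full runs")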

16.
Price-driven coordination method for solving plant-wide MPC problems
In large-scale model predictive control (MPC) applications, such as plant-wide control, two possible approaches to MPC implementation are centralized and decentralized MPC schemes. These represent the two extremes in the "trade-off" among the desired characteristics of an industrial MPC system, namely accuracy, reliability, and maintainability. To achieve optimal plant operations, coordination of decentralized MPC controllers has been identified as both an opportunity and a challenge. Typically, the plant-wide MPC problem can be formulated as a large-scale quadratic program (QP) with linking equality constraints. Such problems can be decomposed and solved with the price-driven coordination method, and on-line solution of these structured large-scale optimization problems requires an efficient price-adjustment strategy to find an "equilibrium price". This work develops an efficient price-adjustment algorithm based on Newton's method, in which sensitivity analysis and active-set change identification techniques are employed. With the off-diagonal element abstraction technique and the enhanced price-driven coordination algorithm, a coordinated, decentralized MPC framework is proposed. Several case studies show that the proposed coordination-based decentralized MPC scheme is an effective approach to plant-wide MPC applications, providing a high degree of reliability and accuracy at a reasonable computational load.
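
A toy illustration of price-driven coordination is sketched below in Python; the two local quadratic subproblems and the plain gradient-style price update are assumptions chosen for brevity, whereas the paper employs a Newton-based adjustment with sensitivity analysis.

# Hedged sketch of price-driven coordination on a toy problem: two local QP
# "controllers" minimize their own cost plus a price on a shared resource,
# and a coordinator adjusts the price until the linking constraint
# u1 + u2 = TARGET is met.
TARGET = 2.0      # linking equality constraint: u1 + u2 must equal this
ALPHA = 0.5       # price-adjustment step size

def local_solution(setpoint, price):
    """Minimize (u - setpoint)^2 + price * u  ->  u = setpoint - price / 2."""
    return setpoint - price / 2.0

price = 0.0
for it in range(100):
    u1 = local_solution(3.0, price)   # subproblem 1 wants u1 near 3
    u2 = local_solution(1.0, price)   # subproblem 2 wants u2 near 1
    mismatch = (u1 + u2) - TARGET
    if abs(mismatch) < 1e-8:
        break
    price += ALPHA * mismatch         # raise the price if the resource is over-used

print(f"equilibrium price {price:.3f}, u1 = {u1:.3f}, u2 = {u2:.3f}")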

17.
We consider time-domain model validation for sampled-data systems with H-compatible uncertainty models. These uncertainty models consist of a nominal continuous-time plant model together with given bounds for system uncertainty and signal disturbances. The validation problem is to determine whether or not a given discrete sampled input-output data record is consistent with a postulated uncertainty model. Based on continuous-time interpolation theorems, we provide validation algorithms for unstructured, additive uncertainty models that reduce to convex programming. We treat both linear time-invariant and linear time-varying modeling uncertainty sets.

18.
19.
Schumpeter maintained that oscillations of macroeconomic variables are only the "secondary wave" of business cycles, a reflex of more fundamental "primary waves" at the microeconomic level caused by the innovative activity of entrepreneurs. Uniting Schumpeter's concern for innovation with Keynes' concern for uncertainty and expectation formation, this article focuses on the behaviour of entrepreneurs confronting uncertainty caused by innovation. Entrepreneurs' behaviour is reconstructed by modelling the functioning of their cognitive processes when innovations appear. Recognition of the possibilities opened up by a successful innovation generates a state of optimism in the minds of individual entrepreneurs, which eventually propagates to the whole economy, triggering an investment upswing. Likewise, unsuccessful innovations can trigger a downswing.

20.
Cellular Automata (CA) models are widely used to study the spatial dynamics of urban growth and evolving patterns of land use. One complication across CA approaches is the relatively short period of data available for calibration, providing sparse information on patterns of change and presenting problematic signal-to-noise ratios. To overcome the problem of short-term calibration, this study investigates a novel approach in which the model is calibrated based on the urban morphological patterns that emerge from a simulation starting from urban genesis, i.e., a land cover map completely devoid of urban land. The application of the model uses the calibrated parameters to simulate urban growth forward in time from a known urban configuration.

This approach to calibration is embedded in a new framework for the calibration and validation of a Constrained Cellular Automata (CCA) model of urban growth. The investigated model uses just four parameters to reflect processes of spatial agglomeration and preservation of scarce non-urban land at multiple spatial scales, and makes no use of ancillary layers such as zoning, accessibility, and physical suitability. As there are no anchor points that guide urban growth to specific locations, the parameter estimation uses a goodness-of-fit (GOF) measure, inspired by the literature on fractal urban form, that compares built-density distributions. The model calibration is a novel application of Markov Chain Monte Carlo Approximate Bayesian Computation (MCMC-ABC). This method provides an empirical distribution of parameter values that reflects model uncertainty. The validation uses multiple samples from the estimated parameters to quantify the propagation of model uncertainty to the validation measures.

The framework is applied to two UK towns (Oxford and Swindon). The results, including cross-application of parameters, show that the models effectively capture the different urban growth patterns of both towns. For Oxford, the CCA correctly produces the pattern of scattered growth in the periphery, and for Swindon, the pattern of compact, concentric growth. The ability to identify different modes of growth has both theoretical and practical significance. Existing land use patterns can be an important indicator of future trajectories. Planners can be provided with insight into alternative future trajectories, the available decision space, and the cumulative effect of parcel-by-parcel planning decisions.
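
The following Python sketch conveys the ABC idea with plain rejection sampling (simpler than the paper's MCMC-ABC); the toy growth simulator, summary statistic, prior, and tolerance are all invented for illustration.

# Hedged sketch of Approximate Bayesian Computation (rejection variant):
# parameters are drawn from a prior, a toy "urban growth" simulator is run
# from a single-seed urban genesis, and draws are kept only when a summary
# statistic of the simulated map is close to the observed one.
import numpy as np

rng = np.random.default_rng(4)
OBSERVED_URBAN_FRACTION = 0.23      # pretend summary statistic of the real map
TOLERANCE = 0.02

def simulate_urban_fraction(attractiveness, steps=40, size=48):
    """Toy growth model: cells urbanize with probability tied to the parameter."""
    grid = np.zeros((size, size), dtype=bool)
    grid[size // 2, size // 2] = True           # urban genesis: a single seed
    for _ in range(steps):
        neighbours = (np.roll(grid, 1, 0) | np.roll(grid, -1, 0)
                      | np.roll(grid, 1, 1) | np.roll(grid, -1, 1))
        candidates = neighbours & ~grid
        grid |= candidates & (rng.random(grid.shape) < attractiveness)
    return grid.mean()

accepted = []
for _ in range(1000):
    theta = rng.uniform(0.0, 0.5)               # prior on the growth parameter
    stat = simulate_urban_fraction(theta)
    if abs(stat - OBSERVED_URBAN_FRACTION) < TOLERANCE:
        accepted.append(theta)

accepted = np.array(accepted)
print(f"{len(accepted)} accepted draws; posterior mean ~ {accepted.mean():.3f}")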

