Full text (subscription): 13 articles
Free: 0 articles

By subject: building science (4), energy and power engineering (2), general industrial technology (2), atomic energy technology (1), automation technology (4)

By year: 2018 (1), 2017 (1), 2015 (2), 2013 (4), 2012 (1), 2011 (4)

13 results in total.
1.
Traditionally, model calibration is formulated as a single-objective problem, where fidelity to measurements is maximized by adjusting model parameters. In such a formulation, however, the model with the best fidelity merely represents an optimum compromise between various forms of errors and uncertainties; thus, multiple calibrated models can be found that demonstrate comparable fidelity, producing non-unique solutions. To alleviate this problem, the authors formulate model calibration as a multi-objective problem with two distinct objectives: fidelity and robustness. Herein, robustness is defined as the maximum allowable uncertainty in calibrating model parameters with which the model continues to yield acceptable agreement with measurements. The proposed approach is demonstrated through the calibration of a finite element model of a steel moment-resisting frame.
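The abstract leaves the implementation unspecified; the sketch below (Python) only illustrates how the two competing objectives could be evaluated for a toy two-parameter model. The exponential response model, the 0.05 misfit tolerance, and the uniform perturbation sampling used to estimate robustness are illustrative assumptions, not the authors' finite element formulation; in practice these objectives would be handed to a multi-objective optimizer such as NSGA-II.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "measurements": response of a 2-parameter model at a few inputs
# (a stand-in for the steel frame FE model; purely illustrative).
x = np.linspace(0.0, 1.0, 8)
theta_true = np.array([1.2, 0.4])

def model(theta, x):
    """Hypothetical fast-running model: amplitude and decay rate."""
    return theta[0] * np.exp(-theta[1] * x)

y_meas = model(theta_true, x) + rng.normal(0.0, 0.02, x.size)

def fidelity(theta):
    """Objective 1 (minimize): root-mean-square misfit to measurements."""
    return np.sqrt(np.mean((model(theta, x) - y_meas) ** 2))

def robustness(theta, tol=0.05, n_samples=200, radii=np.linspace(0.0, 0.5, 26)):
    """Objective 2 (maximize): largest parameter-uncertainty radius for which
    all sampled perturbations of theta still keep the misfit below `tol`."""
    best = 0.0
    for r in radii[1:]:
        perturbed = theta + rng.uniform(-r, r, size=(n_samples, theta.size))
        if all(fidelity(p) <= tol for p in perturbed):
            best = r
        else:
            break
    return best

# Evaluate both objectives on a few candidate calibrations; a real study
# would wrap these evaluations in a multi-objective optimizer.
candidates = [theta_true + d for d in ([0.0, 0.0], [0.05, 0.0], [0.0, 0.05], [0.1, 0.1])]
for theta in candidates:
    print(f"theta={np.round(theta, 3)}  fidelity={fidelity(theta):.4f}  "
          f"robustness={robustness(theta):.2f}")
```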
2.
3.
4.
Activities such as global sensitivity analysis, statistical effect screening, uncertainty propagation, and model calibration have become integral to the Verification and Validation (V&V) of numerical models and computer simulations. One of the goals of V&V is to assess prediction accuracy and uncertainty, which feeds directly into reliability analysis or the Quantification of Margin and Uncertainty (QMU) of engineered systems. Because these analyses involve multiple runs of a computer code, they can rapidly become computationally expensive. An alternative to Monte Carlo-like sampling is to combine a design of computer experiments with meta-modeling, replacing the potentially expensive computer simulation with a fast-running emulator. The surrogate can then be used to estimate sensitivities, propagate uncertainty, and calibrate model parameters at a fraction of the cost it would take to wrap a sampling algorithm or optimization solver around the physics-based code. Doing so, however, carries the risk of developing an incorrect emulator that erroneously approximates the “true-but-unknown” sensitivities of the physics-based code. We demonstrate the extent to which this occurs when Gaussian Process Modeling (GPM) emulators are trained in high-dimensional spaces using too sparsely populated designs of experiments. Our illustration analyzes a variant of the Rosenbrock function in which several effects are made statistically insignificant while others are strongly coupled, thereby mimicking a situation that is often encountered in practice. In this example, the combination of a GPM emulator and design of experiments leads to an incorrect approximation of the function. A mathematical proof of the origin of the problem is proposed. The adverse effects that too sparsely populated designs may produce are discussed for the coverage of the design space, estimation of sensitivities, and calibration of parameters. This work attempts to raise awareness of the potential dangers of not allocating enough resources when exploring a design space to develop fast-running emulators.
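The abstract does not give the exact Rosenbrock variant or design sizes; the following sketch uses a generic down-weighted Rosenbrock-like function, Latin hypercube designs from scipy, and scikit-learn's Gaussian process regressor to illustrate how emulator accuracy (here measured as R^2 on held-out runs) can degrade when the design is too sparse for the dimensionality.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

DIM = 6  # modest dimensionality, enough to show the effect

def rosenbrock_variant(X):
    """Generic Rosenbrock-like test function on [0, 1]^DIM.
    The last two coupling terms are down-weighted so the trailing inputs
    are nearly insignificant, while the leading pairs stay strongly coupled."""
    Z = 4.0 * X - 2.0                            # map inputs to [-2, 2]
    w = np.array([1.0, 1.0, 1.0, 0.01, 0.01])    # one weight per coupling term
    terms = 100.0 * (Z[:, 1:] - Z[:, :-1] ** 2) ** 2 + (1.0 - Z[:, :-1]) ** 2
    return terms @ w

def emulator_r2(n_train, seed=0):
    """Train a GP emulator on a Latin hypercube design of n_train runs and
    score it (R^2) against a dense set of held-out evaluations."""
    X_train = qmc.LatinHypercube(d=DIM, seed=seed).random(n=n_train)
    y_train = rosenbrock_variant(X_train)

    kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(DIM))
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

    X_test = np.random.default_rng(seed + 1).uniform(size=(2000, DIM))
    return gp.score(X_test, rosenbrock_variant(X_test))

for n in (20, 80, 320):
    print(f"{n:4d} training runs -> emulator R^2 = {emulator_r2(n):.3f}")
```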
5.
Conventional water pipeline leak detection surveys employ labour-intensive acoustic techniques, which are usually expensive and not amenable to continuous monitoring of distribution systems. Many previous studies attempted to address these limitations by proposing and evaluating a myriad of continuous, long-term monitoring techniques. However, these techniques have difficulty identifying leaks in the presence of pipeline system complexities (e.g. T-joints), offer limited compatibility with popular pipe materials (e.g. PVC), and are in some cases intrusive in nature. Recently, a non-intrusive leak detection technique based on pipeline surface vibration has been proposed to address some of these limitations. The technique involves continuous monitoring of the change in the cross-spectral density of surface vibration measured at discrete locations along the pipeline. Previously, its capabilities were demonstrated through an experimental campaign carried out on a simple pipeline set-up. This paper presents a follow-up evaluation of the technique on a real-size experimental looped pipeline system, located in a laboratory, that includes complexities such as junctions, bends and varying pipeline sizes. The results demonstrated the potential of the proposed technique to detect and assess the onset of single or multiple leaks in a complex system.
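As an illustration only, the sketch below monitors the change in cross-spectral density between two simulated accelerometer records using scipy.signal.csd. The synthetic pump harmonic, the way the leak is injected as correlated broadband noise, and the 100-800 Hz change metric are assumptions made for the sketch, not the authors' experimental processing chain.

```python
import numpy as np
from scipy.signal import csd

fs = 2000.0                          # sampling rate [Hz] (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(1)

def sensor_pair(leak=False):
    """Simulate two accelerometer records on the pipe surface: common pump
    harmonics plus independent measurement noise. A leak is modelled, purely
    illustratively, as broadband noise seen by both sensors with different
    attenuation, which raises their cross-spectral density."""
    pump = 0.5 * np.sin(2 * np.pi * 50.0 * t)
    s1 = pump + 0.2 * rng.normal(size=t.size)
    s2 = pump + 0.2 * rng.normal(size=t.size)
    if leak:
        leak_noise = rng.normal(size=t.size)
        s1 = s1 + 0.15 * leak_noise
        s2 = s2 + 0.10 * leak_noise
    return s1, s2

def csd_magnitude(s1, s2):
    f, Pxy = csd(s1, s2, fs=fs, nperseg=4096)
    return f, np.abs(Pxy)

# Baseline (no leak) vs. monitored (possible leak) cross-spectral densities.
f, baseline = csd_magnitude(*sensor_pair(leak=False))
_, monitored = csd_magnitude(*sensor_pair(leak=True))

# Simple change metric: mean increase in CSD magnitude away from the pump
# harmonic; thresholding a statistic like this would flag a leak onset.
band = (f > 100.0) & (f < 800.0)
change = np.mean(monitored[band] - baseline[band])
print(f"mean CSD increase in the 100-800 Hz band: {change:.3e}")
```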
6.
Verification and validation (V&V) offers the potential to play an indispensable role in the development of credible models for the simulation of wind turbines. This paper highlights the development of a three-dimensional finite element model of the CX-100 wind turbine blade. The scientific hypothesis that we wish to confirm by applying V&V activities is that it is possible to develop a fast-running model capable of predicting the low-order vibration dynamics with sufficient accuracy. A computationally efficient model is achieved by segmenting the geometry of the blade into only six sections. It is further assumed that each cross section can be homogenized with isotropic material properties. The main objectives of the V&V activities deployed are, first, to assess the extent to which these assumptions are justified and, second, to quantify the resulting prediction uncertainty. Designs of computer experiments are analyzed to understand the effects of parameter uncertainty and identify the significant sensitivities. A calibration of model parameters to natural frequencies predicted by the simplified model is performed in two steps, using, first, a free-free configuration of the blade and, second, a fixed-free configuration. This two-step approach is convenient to decouple the material properties from the parameters of the model that describe the boundary condition. Here, calibration is not formulated as an optimization problem. Instead, it is viewed as a problem of inference uncertainty quantification, where measurements are used to learn the uncertainty of model parameters. Gaussian process models, statistical tests and Markov chain Monte Carlo sampling are combined to explore the (true but unknown) joint probability distribution of parameters that, when sampled, produces bounds of prediction uncertainty that are consistent with the experimental variability. An independent validation assessment follows the calibration and is applied to mode shape vectors. Despite the identification of isolated issues with the simulation code and the model developed, the overarching conclusion is that the modeling strategy is sound and leads to an accurate-enough, fast-running simulation of blade dynamics. This publication is Part II of a two-part effort that highlights the V&V steps required to develop a robust model of a wind turbine blade, where Part I emphasizes code verification and the quantification of numerical uncertainty. Approved for unlimited public release on August 26, 2011, LA-UR-11-4997.
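The calibration-as-inference step can be pictured with a minimal Metropolis sampler. The surrogate below, which scales three nominal natural frequencies with the square root of a single stiffness multiplier, and all numerical values are invented for illustration; the paper's actual workflow combines Gaussian process models, statistical tests and MCMC over several parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical "measured" natural frequencies of the blade [Hz] and their
# experimental scatter (standard deviations). Purely illustrative numbers.
f_meas = np.array([7.3, 19.8, 32.5])
f_std = np.array([0.15, 0.40, 0.60])

def predicted_frequencies(e_scale):
    """Fast-running surrogate for the simplified FE model: natural frequencies
    scale with the square root of an effective stiffness multiplier e_scale
    (a stand-in for the homogenized modulus)."""
    nominal = np.array([7.0, 19.0, 31.0])
    return nominal * np.sqrt(e_scale)

def log_posterior(e_scale):
    if not (0.5 < e_scale < 2.0):          # uniform prior bounds (assumed)
        return -np.inf
    resid = (predicted_frequencies(e_scale) - f_meas) / f_std
    return -0.5 * np.sum(resid ** 2)

# Metropolis sampling of the stiffness multiplier.
n_steps, step = 20000, 0.02
chain = np.empty(n_steps)
current = 1.0
lp_current = log_posterior(current)
for i in range(n_steps):
    proposal = current + step * rng.normal()
    lp_prop = log_posterior(proposal)
    if np.log(rng.uniform()) < lp_prop - lp_current:
        current, lp_current = proposal, lp_prop
    chain[i] = current

posterior = chain[5000:]                    # discard burn-in
lo, hi = np.percentile(posterior, [2.5, 97.5])
print(f"stiffness multiplier: mean={posterior.mean():.3f}, "
      f"95% interval=({lo:.3f}, {hi:.3f})")
```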
7.
In partitioned analysis of systems that are driven by the interaction of functionally distinct but strongly coupled constituents, the predictive accuracy of the simulation hinges on the accuracy of the individual constituent models. The potential improvement in the predictive accuracy of the simulation that can be gained by improving a constituent model depends not only on the relative importance of that constituent, but also on its inherent uncertainty and inaccuracy. A need therefore exists for prioritization of code development efforts, so that available resources can be cost-effectively allocated to the constituents that require improvement the most. This article proposes a novel, quantitative code prioritization index to accomplish such a task and demonstrates its application on a case study of a steel frame with semirigid connections. Findings show that as high-fidelity constituent models are integrated, the predictive ability of the model-based simulation improves; however, the rate of improvement depends on the sequence in which the constituents are improved.
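The abstract does not state how the code prioritization index is computed; the snippet below shows one plausible, purely hypothetical bookkeeping scheme (importance weighted by contributed uncertainty and bias) to make the idea of ranking constituents concrete. Neither the formula nor the numbers come from the article.

```python
import numpy as np

# Hypothetical inputs for three constituent models of a coupled simulation
# (e.g. connection, member, and damping models of a semirigid steel frame).
constituents = ["connection", "member", "damping"]
sensitivity = np.array([0.55, 0.30, 0.15])   # relative importance of each constituent
uncertainty = np.array([0.40, 0.10, 0.25])   # share of output spread it contributes
bias = np.array([0.20, 0.05, 0.02])          # estimated systematic inaccuracy

# One plausible index: importance weighted by how much error the constituent
# carries; constituents with the highest scores are improved first.
index = sensitivity * (uncertainty + bias)
order = np.argsort(index)[::-1]

for rank, i in enumerate(order, start=1):
    print(f"{rank}. {constituents[i]:<11s} index = {index[i]:.3f}")
```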
8.
9.
Many evolving nuclear energy technologies use advanced predictive multiscale, multiphysics modeling and simulation (M&S) capabilities to reduce the cost and schedule of design and licensing. Historically, experiments served as the primary tool for the design and understanding of nuclear system behavior, while M&S played the subordinate role of supporting experiments. In the new era of multiscale, multiphysics computational-based technology development, this role has been reversed. Experiments will still be needed, but they will be performed at different scales to calibrate and validate the models, leading to predictive simulations for design and licensing. Minimizing the required number of validation experiments produces cost and time savings. The use of multiscale, multiphysics models introduces challenges in validating these predictive tools; traditional methodologies will have to be modified to address these challenges. This paper gives the basic aspects of a methodology that can potentially be used to address these new challenges in the design and licensing of evolving nuclear technology. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification, steps similar to the components of the traditional US Nuclear Regulatory Commission (NRC) licensing approach, with the exception of the calibration step. An enhanced calibration concept is introduced here and is accomplished through data assimilation. The goal of this methodology is to enable best-estimate prediction of system behaviors in both normal and safety-related environments. This goal requires the additional steps of estimating the domain of validation and quantifying uncertainties, allowing results to be extended to areas of the validation domain that are not directly tested with experiments. These might include the extension of the M&S capabilities for application to full-scale systems. The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to existing data, so that required new testing can be minimized, saving cost by demonstrating that further testing will not enhance the quality of the predictive tools. The proposed methodology is at a conceptual level. Upon maturity, and if considered favorably by the stakeholders, it could serve as a new framework for the next generation of the best estimate plus uncertainty (BEPU) licensing methodology that the NRC has developed. In order to achieve maturity, the methodology must be communicated to scientific, design, and regulatory stakeholders for discussion and debate. This paper is the first step in establishing that communication.
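As a minimal illustration of calibration through data assimilation, the snippet below performs a scalar, linear-Gaussian (Kalman-style) parameter update. The setting and all numbers are generic textbook assumptions, not the methodology or data of the paper.

```python
import numpy as np

# Prior belief about a single model parameter (e.g. a heat-transfer coefficient).
theta_prior, var_prior = 1.0, 0.25

# Observation: a measured system response, related to the parameter by a
# linearized sensitivity H, with measurement variance R. All values assumed.
y_obs, H, R = 2.3, 2.0, 0.04

# Kalman-style update: blend prior and data according to their uncertainties.
K = var_prior * H / (H * var_prior * H + R)          # gain
theta_post = theta_prior + K * (y_obs - H * theta_prior)
var_post = (1.0 - K * H) * var_prior

print(f"posterior parameter: {theta_post:.3f} +/- {np.sqrt(var_post):.3f}")
```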
10.
In experiment-based validation, uncertainties and systematic biases in model predictions are reduced either by increasing the amount of experimental evidence available for model calibration—thereby mitigating prediction uncertainty—or by increasing the rigor in the definition of physics and/or engineering principles—thereby mitigating prediction bias. Hence, decision makers must regularly choose between allocating resources for experimentation and for further code development. The authors propose a decision-making framework to assist in this resource allocation strictly from the perspective of predictive maturity, and demonstrate its application on a nontrivial problem: predicting the plastic deformation of polycrystals.