Similar Literature
1.
It is widely accepted that the Nash equilibrium suitably models agents' behaviour in electricity markets, since it captures the agents' simultaneous profit maximisation. In the literature, these approaches are usually addressed with deterministic representations, despite the fact that electricity markets are strongly conditioned by uncertainty in demand and in the agents' bidding strategies. Only a few equilibrium-modelling approaches under uncertainty can be found in the literature, most of them using probability distributions. However, probabilistic approaches may lead to very complex formulations and generally require restrictive assumptions (such as normality or independence) that can hardly be verified in real, complex problems. A conjectured-price-response equilibrium model is proposed that uses LR-possibility distributions to represent the uncertainty of the residual demand curves faced by the participating agents. By modelling the risk-aversion attitudes of the agents, the resulting possibilistic equilibrium is transformed into a simplified deterministic one, which is solved with a new globally convergent algorithm for variational inequality problems. Results for a real-size electricity system show the robustness of this new approach when compared with risk-neutral approaches.

2.
Small data-set learning problems are attracting more attention because of the short product life cycles caused by increasing global competition. Although statistical approaches and machine learning algorithms are widely applied to extract information from such data, they are fundamentally developed on the assumption that the training samples represent the properties of the whole population. However, as the properties contained in the training samples are limited, the knowledge the learning algorithms extract may also be deficient. Virtual sample generation approaches, used as a form of data pre-treatment, have proved effective for handling small data-set problems. By considering the relationships among attributes in the value-generation procedure, this research proposes a non-parametric process that learns the trend similarities among attributes and then uses them to estimate the ranges within which attribute values are likely to fall when the other attribute values are given. The ranges of the attribute values of the virtual samples are then estimated stepwise using triangular membership functions (MFs) built to represent the attribute sample distributions. In the experiment, two real cases are examined with four modelling tools: the M5′ model tree (M5′), multiple linear regression, support vector regression, and a back-propagation neural network. The results show that the forecasting accuracies of the four modelling tools improve when the training sets contain virtual samples, and the proposed procedure yields significantly smaller predictive errors than other approaches.
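As a rough, hypothetical illustration of the virtual-sample idea (not the authors' full procedure, which also exploits trend similarities among attributes), the sketch below builds a triangular membership function per attribute from a small sample and draws virtual values from it; the helper names are invented for this example.

```python
import numpy as np

def triangular_mf_params(x):
    """Estimate (left, mode, right) of a triangular membership function
    from a small sample: support endpoints plus the histogram mode."""
    lo, hi = x.min(), x.max()
    hist, edges = np.histogram(x, bins="auto")
    mode = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
    return lo, mode, hi

def generate_virtual_samples(X, n_virtual, seed=None):
    """Draw virtual samples attribute-by-attribute from triangular
    distributions fitted to each column of the small data set X."""
    rng = np.random.default_rng(seed)
    cols = []
    for j in range(X.shape[1]):
        lo, mode, hi = triangular_mf_params(X[:, j])
        cols.append(rng.triangular(lo, mode, hi, size=n_virtual))
    return np.column_stack(cols)

# Example: augment a 10-sample, 3-attribute data set with 50 virtual samples.
X_small = np.random.default_rng(0).normal(size=(10, 3))
X_aug = np.vstack([X_small, generate_virtual_samples(X_small, 50, seed=1)])
```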

3.
C. Jiang, X. Han, G. Y. Lu. Acta Mechanica, 2012, 223(9): 2021-2038
In traditional reliability analysis, uncertain parameters are generally modelled by idealized probability distributions with infinite tails. This, however, is inconsistent with practical situations, as nearly all uncertain parameters in engineering structures take their values within a limited interval. To eliminate this inconsistency and thereby improve the precision of the reliability analysis, truncated probability distributions are employed to quantify the uncertainty in this paper, and a corresponding reliability analysis method is developed. Two cases of positional relations between the uncertainty domain and the failure surface are distinguished according to whether their intersection is non-empty or empty. The probabilistic method and the non-probabilistic convex-model method are employed to deal with these two cases, respectively, and on this basis a hybrid reliability model is constructed for truncated-distribution problems. An efficient approach is also provided to distinguish the two positional relations and thereby determine which of the probabilistic and non-probabilistic methods should be used when computing the hybrid reliability. Five numerical examples demonstrate the effectiveness of the present method.
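A minimal sketch, not the paper's hybrid method, of how truncated input distributions enter a crude Monte Carlo reliability estimate; the limit-state function, bounds, and parameter values here are invented for illustration.

```python
import numpy as np
from scipy.stats import truncnorm

def truncated_normal(mean, std, lower, upper):
    """Truncated normal with physical bounds [lower, upper]."""
    a, b = (lower - mean) / std, (upper - mean) / std
    return truncnorm(a, b, loc=mean, scale=std)

def failure_probability(limit_state, dists, n=200_000, seed=0):
    """Crude Monte Carlo estimate of P[g(X) <= 0] with truncated inputs."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([d.rvs(size=n, random_state=rng) for d in dists])
    return np.mean(limit_state(X) <= 0.0)

# Hypothetical limit state g = R - S with bounded resistance and load.
dists = [truncated_normal(5.0, 0.5, 3.5, 6.5),   # resistance R
         truncated_normal(3.0, 0.8, 0.0, 6.0)]   # load S
pf = failure_probability(lambda X: X[:, 0] - X[:, 1], dists)
```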

4.
This paper suggests a robust asset-liability management framework for investment products with guarantees, such as guaranteed investment contracts and equity-linked notes. Stochastic programming and robust optimization approaches are introduced to deal with data uncertainty in asset returns and interest rates. The statistical properties of the probability distributions of the uncertain parameters are incorporated in the model through appropriately selected symmetric and asymmetric uncertainty sets. Practical data-driven approaches for implementing the robust models are also discussed. Numerical results using generated and real market data illustrate the performance of the robust asset-liability management strategies, which perform better in unfavourable market regimes than traditional stochastic programming approaches. The effectiveness of the robust investment strategies can be improved further by carefully calibrating the shape and size of the uncertainty sets for asset returns.
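As a toy illustration of how an uncertainty set turns into a tractable robust problem (a much simpler setting than the paper's asset-liability model), the sketch below maximizes the worst-case portfolio return under box uncertainty: with returns in [r_nom - d, r_nom + d] and long-only weights, the worst case is (r_nom - d)·w, so the robust problem reduces to a linear program. All numbers and position limits are invented.

```python
import numpy as np
from scipy.optimize import linprog

r_nom = np.array([0.06, 0.08, 0.11])   # nominal asset returns (assumed)
d     = np.array([0.01, 0.03, 0.06])   # half-widths of the box uncertainty set

res = linprog(c=-(r_nom - d),                    # maximize worst-case return
              A_eq=np.ones((1, 3)), b_eq=[1.0],  # fully invested
              bounds=[(0.0, 0.6)] * 3)           # long-only, position limits
w_robust = res.x
```

Enlarging `d` shrinks the allocation to the most uncertain asset, which is the calibration effect the abstract refers to.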

5.
6.
This paper develops a methodology to assess the validity of computational models when some quantities are affected by epistemic uncertainty. Three types of epistemic uncertainty regarding the input random variables are considered: interval data, sparse point data, and probability distributions with parameter uncertainty. When the model inputs are described by sparse point data and/or interval data, a likelihood-based methodology is used to represent these variables as probability distributions. Two approaches, parametric and non-parametric, are pursued for this purpose. While the parametric approach leads to a family of distributions due to distribution-parameter uncertainty, the principles of conditional probability and total probability can be used to integrate this family into a single distribution; the non-parametric approach directly yields a single probability distribution. The probabilistic model predictions are compared against experimental observations, which may again be point data or interval data. A generalized likelihood function is constructed for Bayesian updating, and the posterior distribution of the model output is estimated. The Bayes factor metric is extended to assess the validity of the model under both aleatory and epistemic uncertainty and to estimate the confidence in the model prediction. The proposed method is illustrated with a numerical example.
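A minimal sketch of the parametric, likelihood-based representation of interval data, assuming a normal family: each interval [a, b] contributes P[a <= X <= b] = F(b) - F(a) to the likelihood, which is then maximized numerically. The data and helper names are illustrative, not from the paper.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def neg_log_like(theta, intervals):
    """Negative log-likelihood of interval data under N(mu, sigma)."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)          # keep sigma positive
    a, b = intervals[:, 0], intervals[:, 1]
    p = norm.cdf(b, mu, sigma) - norm.cdf(a, mu, sigma)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

intervals = np.array([[1.0, 2.5], [1.8, 3.0], [0.5, 1.9], [2.2, 4.0]])
res = minimize(neg_log_like, x0=[intervals.mean(), 0.0], args=(intervals,))
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```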

7.
The traditional Monte Carlo simulation method is inefficient when analysing reliability-updating problems that arise from revisions of the parameter uncertainty. An efficient Monte Carlo simulation-based method for updating slope reliability is therefore proposed. The method comprises two key steps: (1) compute the slope failure probability by Monte Carlo simulation using the initial parameter distributions, and store the failure samples of the simulation; (2) compute the updated failure probability using the joint probability density function with the updated parameter statistics together with the stored failure samples. Two slope problems illustrate the effectiveness of the proposed method. The results show that the method requires no re-run of the Monte Carlo simulation when computing the updated failure probability, making it simple to implement and computationally efficient. Moreover, it is applicable to slope reliability updating with implicitly defined performance functions, and it effectively handles both single-variable and multi-variable updating problems.
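A minimal sketch of the two-step idea just described, assuming independent normal parameters and a simple explicit limit state g = R - S: the updated failure probability is recovered from the stored failure samples by re-weighting with the likelihood ratio f1/f0, with no re-simulation. The distributions and numbers are illustrative, not the paper's case studies.

```python
import numpy as np
from scipy.stats import norm

def mcs_with_failure_samples(limit_state, sample_f0, n=500_000, seed=0):
    """Step 1: Monte Carlo with the initial distribution f0; return the
    initial failure probability and the stored failure samples."""
    rng = np.random.default_rng(seed)
    X = sample_f0(rng, n)
    fail = limit_state(X) <= 0.0
    return fail.mean(), X[fail], n

def updated_pf(fail_samples, n, pdf_f0, pdf_f1):
    """Step 2: re-weight the stored failure samples by the likelihood
    ratio f1/f0 instead of re-running the simulation."""
    w = pdf_f1(fail_samples) / pdf_f0(fail_samples)
    return w.sum() / n

# Hypothetical slope margin g = R - S, with the load mean updated 3.0 -> 3.3.
g = lambda X: X[:, 0] - X[:, 1]
sample_f0 = lambda rng, n: np.column_stack(
    [rng.normal(5.0, 0.5, n), rng.normal(3.0, 0.8, n)])
pdf = lambda X, mS: norm.pdf(X[:, 0], 5.0, 0.5) * norm.pdf(X[:, 1], mS, 0.8)

pf0, Xf, n = mcs_with_failure_samples(g, sample_f0)
pf1 = updated_pf(Xf, n, lambda X: pdf(X, 3.0), lambda X: pdf(X, 3.3))
```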

8.
Y. P. Li, S. L. Nie. Engineering Optimization, 2013, 45(2): 163-183
Innovative prevention, adaptation, and mitigation approaches, as well as policies for sustainable flood management, remain challenges for decision-makers. In this study, a mixed interval-fuzzy two-stage integer programming (IFTIP) method is developed for flood-diversion planning under uncertainty. The method improves upon existing interval, fuzzy, and two-stage programming approaches by allowing uncertainties expressed as probability distributions, fuzzy sets, and discrete intervals to be incorporated directly within the optimization framework. In the modelling formulation, economic penalties, as corrective measures against any infeasibilities arising from a particular realization of the uncertainties, are taken into account. The method can also be used for analysing a variety of policy scenarios associated with different levels of economic penalties. A flood-control management problem is studied to illustrate the applicability of the proposed approach. The results indicate that reasonable solutions are generated: they provide desired flood-diversion alternatives and capacity-expansion schemes with minimized system cost and maximized safety level. The developed IFTIP method is also applicable to other management problems that involve uncertainties presented in multiple formats as well as complexities in policy dynamics.

9.
Uncertainty, probability and information-gaps
This paper discusses two main ideas. First, we focus on info-gap uncertainty, as distinct from probability. Info-gap theory is especially suited to modelling and managing uncertainty in system models: we invest all our knowledge in formulating the best possible model, which leaves the modeller with very faulty and fragmentary information about the variation of reality around that optimal model. Second, we examine the interdependence between uncertainty modelling and decision-making. Good uncertainty modelling requires contact with the end use, namely, with the decision-making application of the uncertainty model. The most important avenue of uncertainty propagation is from the initial data and model uncertainties into uncertainty in the decision domain. Two questions arise: Is the decision robust to the initial uncertainties? Is the decision prone to opportune windfall success? We apply info-gap robustness and opportunity functions to the analysis of representation and propagation of uncertainty in several of the Sandia Challenge Problems.
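A toy info-gap robustness computation, under assumptions not taken from the paper (an interval uncertainty model U(h) = [u_nom - h, u_nom + h] and a performance function whose worst case over U(h) lies on the boundary): the robustness is the largest horizon h at which the worst-case performance still meets the critical requirement.

```python
import numpy as np

def robustness(perf, u_nom, r_crit, h_grid):
    """Largest horizon of uncertainty h such that the worst-case
    performance over U(h) still satisfies perf >= r_crit."""
    h_hat = 0.0
    for h in h_grid:
        worst = min(perf(u_nom - h), perf(u_nom + h))  # boundary worst case
        if worst >= r_crit:
            h_hat = h
        else:
            break
    return h_hat

# Hypothetical: performance depends linearly on an uncertain quantity u.
perf = lambda u: 10.0 * u - 25.0
print(robustness(perf, u_nom=5.0, r_crit=10.0, h_grid=np.linspace(0, 5, 501)))
```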

10.
Evidential networks are considered superior for conducting reliability analysis of complex engineering systems with epistemic uncertainty. However, existing methods tend to suffer combinatorial explosion when multi-state systems are involved: the cost of the reliability analysis increases exponentially with the number of components and the number of functioning states. Therefore, an enhanced reliability analysis method is proposed in this paper for the reliability analysis and performance evaluation of multi-state systems with epistemic uncertainty, through which the combinatorial explosion can be significantly alleviated. First, the functioning states of each component are ordered according to utility functions. Second, the basic belief assignment (BBA) of each component is reassigned in terms of the commonality function, so that a BBA defined on the power set is represented by two extreme BBA distributions defined on the frame of discernment. Third, the reliability intervals of the system states are calculated through the evidential network, and the system performance level is computed. Two multi-state numerical examples demonstrate the effectiveness and efficiency of the proposed method.

11.
12.
The probability distribution of data should be known in advance, so that statistical inferences can be drawn from the data and the information they provide can be understood. To date, nonparametric goodness-of-fit tests have been widely used for probability distribution recognition. However, such procedures cannot guarantee precise recognition when only small samples are available, and the number of groups into which the data are divided influences the results. This study proposes a neural-network-based approach to probability distribution recognition. Two types of neural networks, back-propagation and learning vector quantization, are used to classify normal, exponential, Weibull, uniform, chi-square, t, F, and lognormal distributions. Implementation results demonstrate that the proposed approach outperforms the traditional statistical approach.
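A hedged sketch of the general approach with stand-in choices: scikit-learn's MLPClassifier replaces the paper's back-propagation/LVQ networks, a reduced set of four candidate families replaces the eight studied, and simple location/scale-free moment features replace whatever input encoding the authors used.

```python
import numpy as np
from scipy import stats
from sklearn.neural_network import MLPClassifier

CANDIDATES = {                       # assumed subset of candidate families
    "normal":      lambda rng, n: rng.normal(size=n),
    "exponential": lambda rng, n: rng.exponential(size=n),
    "uniform":     lambda rng, n: rng.uniform(size=n),
    "lognormal":   lambda rng, n: rng.lognormal(size=n),
}

def features(x):
    """Location/scale-free summary features of a sample."""
    z = (x - x.mean()) / x.std()
    return [stats.skew(z), stats.kurtosis(z),
            np.percentile(z, 10), np.percentile(z, 90)]

rng = np.random.default_rng(0)
X, y = [], []
for label, sampler in CANDIDATES.items():
    for _ in range(300):                       # 300 small samples per family
        X.append(features(sampler(rng, 30)))   # sample size 30
        y.append(label)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(np.array(X), y)
print(clf.predict([features(rng.exponential(size=30))]))
```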

13.
Multi-attribute utility analysis (MAUA) has emerged as a powerful tool for materials selection and evaluation. An operations research technique, MAUA has been used in a wide range of engineering areas, of which materials science and engineering is one of the more recent. Utility analysis affords a rational method of materials selection that avoids many of the fundamental logical difficulties of widely used alternative approaches. However, MAUA has traditionally been applied only to materials selection problems in which the attribute levels of the alternatives are known with certainty, which is not the case for many new technologies. Another operations research technique, subjective probability assessment (SPA), can be used to address this issue. SPA makes it possible to elicit a probability distribution describing the decision maker's confidence in attribute levels that are highly uncertain. These probability distributions can be used in conjunction with MAUA to provide a consistent framework for materials selection decisions. Furthermore, the use of these techniques extends beyond materials selection into the more speculative areas of materials competitiveness and market demand for new, unproven technologies.

14.
A probabilistic approach for representation of interval uncertainty
In this paper, we propose a probabilistic approach to representing interval data for input variables in reliability and uncertainty analysis problems, using the flexible family of continuous Johnson distributions. Such a probabilistic representation of interval data facilitates a unified framework for handling aleatory and epistemic uncertainty. For fitting probability distributions, methods such as moment matching are commonly used. However, unlike point data, for which single estimates of the moments can be calculated, the moments of interval data can only be computed in terms of upper and lower bounds. Finding bounds on the moments of interval data has generally been considered NP-hard, because it involves a search among combinations of multiple values of the variables, including the interval endpoints. In this paper, we present efficient algorithms based on continuous optimization to find bounds on the second and higher moments of interval data. Numerical examples show that the proposed bounding algorithms scale polynomially with the number of intervals. Using the bounds on moments computed with the proposed approach, we fit a family of Johnson distributions to interval data. Furthermore, using a percentile-based optimization approach, we find the bounding envelopes of the family of distributions, termed a Johnson p-box, analogous to the notion of an empirical p-box in the literature. Several sets of interval data with different numbers of intervals and types of overlap are presented to demonstrate the proposed methods. In contrast to the computationally expensive nested analysis typically required in the presence of interval variables, the proposed probabilistic representation enables inexpensive optimization-based strategies for estimating bounds on an output quantity of interest.
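A small sketch of the moment-bounding step under simplifying assumptions: bounds on the mean follow directly from the endpoints; the minimum variance is found by continuous optimization over the box (the sample variance being convex in the point values), while the maximum, which lies at a vertex, is brute-forced here, which is feasible only for a few intervals (the paper's algorithms avoid this enumeration).

```python
import numpy as np
from itertools import product
from scipy.optimize import minimize

intervals = np.array([[1.0, 2.0], [1.5, 3.5], [0.5, 1.2], [2.8, 4.0]])
lo, hi = intervals[:, 0], intervals[:, 1]

# Bounds on the mean are immediate from the endpoints.
mean_bounds = (lo.mean(), hi.mean())

# The sample variance is convex in x, so its minimum over the box is a
# smooth continuous-optimization problem.
var = lambda x: np.var(x, ddof=1)
res = minimize(var, x0=0.5 * (lo + hi), bounds=list(zip(lo, hi)))
var_lower = res.fun

# The maximum of a convex function over a box lies at a vertex; brute-force
# enumeration is workable only for few intervals (shown for illustration).
var_upper = max(var(np.array(v)) for v in product(*intervals))
```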

15.
Stochastic multicriteria acceptability analysis (SMAA) is a decision support method that represents uncertain, imprecise, and partially missing criteria measurements and preference information as probability distributions. In this paper, we test how the assumed shape of the utility or value function affects the results of SMAA in two problem settings: identifying the most preferred alternative and ranking all the alternatives. A linear value function has been applied most frequently, because more precise shape information can be difficult to obtain in real-life applications. We analyse one past real-life problem and a large number of randomly generated test problems of different sizes using additive functions of different shapes, varying from linear to increasingly concave and convex exponential utility or value functions that correspond to different attitudes towards marginal value or risk. The results indicate that in most cases slight non-linearity does not significantly affect the results. The proposed method can be used to evaluate how robust a particular real-life decision problem is with respect to the shape of the function; based on this information, it is possible to determine how accurately the decision makers' preferences need to be assessed in a particular problem, and whether a simple linear shape can be assumed.
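A compact sketch of the kind of experiment described above, assuming a one-parameter exponential value function and uniformly random weights: it estimates each alternative's rank-1 acceptability index under a linear and a concave value function so the two can be compared. The criteria matrix is invented.

```python
import numpy as np

def value(u, r):
    """Exponential value function on [0, 1]; r -> 0 recovers the linear one."""
    return u if abs(r) < 1e-9 else (1 - np.exp(-r * u)) / (1 - np.exp(-r))

def acceptability(criteria, r=0.0, n_sim=20_000, seed=0):
    """Rank-1 acceptability index of each alternative under uniformly
    random weights, for a given value-function curvature r."""
    rng = np.random.default_rng(seed)
    v = value(criteria, r)                  # alternatives x criteria, in [0, 1]
    w = rng.dirichlet(np.ones(v.shape[1]), size=n_sim)  # random weight vectors
    winners = (w @ v.T).argmax(axis=1)
    return np.bincount(winners, minlength=v.shape[0]) / n_sim

# Hypothetical scaled criteria matrix: 3 alternatives, 2 criteria.
C = np.array([[0.9, 0.2], [0.5, 0.6], [0.1, 1.0]])
print(acceptability(C, r=0.0))   # linear value function
print(acceptability(C, r=2.0))   # concave (risk-averse) value function
```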

16.
The high computational cost of evaluating objective functions in electromagnetic optimum design problems necessitates cost-effective techniques. The paper discusses one popular technique, surrogate modelling, with emphasis on the importance of considering both the accuracy of, and the uncertainty in, the surrogate model. After a brief review of how such considerations have been made in the single-objective optimisation of electromagnetic devices, their use with kriging surrogate models is investigated. Traditionally, space-filling experimental designs are used to construct the initial kriging model, with the aim of maximising the accuracy of the initial surrogate model from which the optimisation search starts. Utility functions, which balance the predictions made by this model against its uncertainty, are often used to select the next point to be evaluated. In this paper, the performance of several utility functions is examined with experimental designs that yield initial kriging models of varying accuracy. It is found that searching for optima with utility functions on more accurate initial kriging models confers no necessary advantage, and that the total number of objective function evaluations can be reduced if the iterative optimisation search is started earlier, with utility functions acting on kriging models of lower accuracy. The implications for electromagnetic optimum design are discussed.
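A hedged sketch of the utility-function step using expected improvement (EI), one common choice of utility, on a Gaussian-process (kriging) surrogate from scikit-learn; the objective, initial design, and candidate grid are stand-ins, not the paper's test problems.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(gp, X_cand, y_best):
    """EI utility: trades off the kriging prediction (exploitation)
    against its standard error (exploration), for minimization."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Sparse initial design on an expensive 1-D objective (a cheap stand-in here).
f = lambda x: np.sin(3 * x) + 0.3 * x**2
X = np.array([[0.2], [1.1], [2.4]])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5)).fit(X, f(X).ravel())

# The next evaluation point is the candidate that maximises the utility.
X_cand = np.linspace(0.0, 3.0, 200).reshape(-1, 1)
x_next = X_cand[expected_improvement(gp, X_cand, f(X).min()).argmax()]
```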

17.
For reliability computation with complex limit-state functions, Monte Carlo simulation methods for the structural failure probability are proposed based on the theoretical joint distribution function and on two approximate joint distribution functions, and a flowchart of the computation is given. Two numerical examples demonstrate the effectiveness of the proposed methods. The results show that the proposed failure-probability simulation methods are highly accurate and particularly suitable for reliability problems with complex limit-state functions. The failure probabilities obtained with the two approximate joint distribution functions are of comparable accuracy; the discrepancy between the approximate and exact results grows as the failure probability decreases and as the correlation between the variables increases. When the failure probability is below 10^-3, the error of the approximate methods becomes large.
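The abstract does not specify the limit-state functions or how the approximate joint distributions are constructed; as one common way to realize an approximate joint distribution, the sketch below samples correlated non-normal variables through a Gaussian copula and estimates the failure probability by crude Monte Carlo. Note the copula correlation is used directly here, without the Nataf-type correction a more careful construction would apply; marginals and numbers are invented.

```python
import numpy as np
from scipy.stats import norm, lognorm

def copula_samples(marginal_ppfs, corr, n, seed=0):
    """Approximate joint sampling: Gaussian copula with correlation `corr`
    mapped through the marginal inverse CDFs."""
    rng = np.random.default_rng(seed)
    Z = rng.multivariate_normal(np.zeros(len(marginal_ppfs)), corr, size=n)
    U = norm.cdf(Z)
    return np.column_stack([ppf(U[:, j]) for j, ppf in enumerate(marginal_ppfs)])

# Hypothetical correlated resistance/load with lognormal marginals.
ppfs = [lambda u: lognorm.ppf(u, 0.2, scale=5.0),
        lambda u: lognorm.ppf(u, 0.4, scale=3.0)]
corr = np.array([[1.0, 0.3], [0.3, 1.0]])

X = copula_samples(ppfs, corr, n=1_000_000)
pf = np.mean(X[:, 0] - X[:, 1] <= 0.0)   # a complex g(X) would plug in here
```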

18.
To perform a fatigue-life analysis of structures, the parameters of the structural loading spectra must be assessed. If the load time series are counted using a two-parameter rainflow counting method, the structural loading spectrum gives the probability of occurrence of a load cycle with given amplitude and mean values. It is beneficial for fatigue-life prediction to describe the loading spectrum by a continuous function, and we have previously shown that mixtures of Gaussian probability density functions can be used to model loading spectra. The main problems of this approach that had not been satisfactorily resolved concern the estimation of the number of components in the mixture models and the modelling of load-cycle distributions with relatively fat tails. In this article, we describe a method for estimating the parameters of mixture models that automatically determines the number of components. The method is applied to modelling simulated and measured loading spectra using mixtures of multivariate Gaussian or t probability density functions. We also show that a mixture of t probability density functions sometimes describes the loading spectra better than a mixture of Gaussian probability density functions.
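The article's own component-selection method is not reproduced here; as one standard stand-in, the sketch below fits Gaussian mixtures of increasing order to (amplitude, mean) rainflow cycle data and picks the number of components by BIC, using scikit-learn. The simulated spectrum is purely illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_mixture_bic(cycles, max_k=8, seed=0):
    """Fit Gaussian mixtures with 1..max_k components to (amplitude, mean)
    cycle data and keep the lowest-BIC model as the selected order."""
    best, best_bic = None, np.inf
    for k in range(1, max_k + 1):
        gmm = GaussianMixture(n_components=k, random_state=seed).fit(cycles)
        if gmm.bic(cycles) < best_bic:
            best, best_bic = gmm, gmm.bic(cycles)
    return best

# Simulated two-population loading spectrum: (amplitude, mean) pairs.
rng = np.random.default_rng(0)
cycles = np.vstack([rng.normal([40, 0], [6, 3], size=(800, 2)),
                    rng.normal([90, 10], [12, 5], size=(200, 2))])
gmm = fit_mixture_bic(cycles)
print(gmm.n_components, gmm.weights_)
```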

19.
20.
In this paper, robust control charts for percentiles based on the location-scale family of distributions are proposed. When the underlying distribution of the quality measurement is unknown, we study the problem of discriminating among candidate distributions in the location-scale family and obtain control charts for percentiles that are insensitive to model mis-specification. Two approaches, a random data-driven model selection approach and a weighted modelling approach, are used to construct the robust control charts for percentiles so as to monitor the manufacturing process effectively. Monte Carlo simulation studies evaluate the performance of the proposed charts for various settings with different percentiles, false-alarm rates, and sample sizes, and the procedures are compared in terms of average run length. The proposed robust control charts are applied to real data sets to illustrate their robustness and usefulness.
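A rough sketch of the weighted-modelling idea under assumed details: candidate families are fitted by maximum likelihood, weighted by their likelihoods, and the weighted percentile estimate is returned as a candidate control limit. Both the weighting scheme and the candidate families are stand-ins for the paper's choices.

```python
import numpy as np
from scipy import stats

def weighted_percentile_limit(x, p=0.05,
                              families=(stats.weibull_min, stats.lognorm)):
    """Weight each fitted candidate family by its likelihood and return
    the weighted average of the p-th percentile estimates."""
    fits = [(f, f.fit(x)) for f in families]
    logL = np.array([np.sum(f.logpdf(x, *prm)) for f, prm in fits])
    w = np.exp(logL - logL.max()); w /= w.sum()      # likelihood weights
    q = np.array([f.ppf(p, *prm) for f, prm in fits])
    return float(w @ q)

# Illustrative in-control sample; the result is a candidate lower limit.
x = stats.weibull_min.rvs(1.8, scale=10.0, size=60, random_state=1)
lcl = weighted_percentile_limit(x)
```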
