20 similar documents retrieved; search time: 0 ms
1.
A stochastic cross-efficiency data envelopment analysis approach for supplier selection under uncertainty
Mariagrazia Dotoli Nicola Epicoco Marco Falagario Fabio Sciancalepore 《International Transactions in Operational Research》2016,23(4):725-748
This paper addresses one of the key objectives of the supply chain strategic design phase: the optimal selection of suppliers. A methodology for supplier selection under uncertainty is proposed, integrating cross-efficiency data envelopment analysis (DEA) and a Monte Carlo approach. Combining these two techniques overcomes the deterministic character of the classical cross-efficiency DEA approach. Moreover, we define an indicator of the robustness of the resulting supplier ranking. The technique can handle the supplier selection problem with nondeterministic input and output data, allowing suppliers to be evaluated under uncertainty, a particularly significant circumstance when assessing potential suppliers. The approach helps buyers choose the right partners under uncertainty and rank suppliers under a multiple-sourcing strategy, even in complex evaluations with many suppliers and many input and output criteria.
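The core idea of the abstract above, sampling the uncertain input/output data and checking how often the same supplier ranking recurs, can be sketched as follows. This is a hypothetical simplification: efficiency is a single-output/single-input ratio rather than the paper's full cross-efficiency DEA linear programs, and the robustness indicator is taken as the fraction of replications that reproduce the most frequent ranking.

```python
import numpy as np

def mc_supplier_ranking(out_mean, out_sd, inp_mean, inp_sd, n_rep=2000, seed=0):
    """Monte Carlo supplier ranking under uncertain input/output data.

    Simplified sketch: a ratio efficiency score stands in for the
    cross-efficiency DEA scores; robustness is the share of replications
    agreeing with the modal ranking.
    """
    rng = np.random.default_rng(seed)
    rankings = []
    for _ in range(n_rep):
        out = rng.normal(out_mean, out_sd)          # sampled uncertain outputs
        inp = np.abs(rng.normal(inp_mean, inp_sd))  # sampled uncertain inputs
        eff = out / inp                             # simplified efficiency score
        rankings.append(tuple(np.argsort(-eff)))    # best supplier first
    modal = max(set(rankings), key=rankings.count)
    robustness = rankings.count(modal) / n_rep
    return list(modal), robustness

ranking, rob = mc_supplier_ranking(
    out_mean=[10.0, 8.0, 6.0], out_sd=[0.5, 0.5, 0.5],
    inp_mean=[5.0, 5.0, 5.0], inp_sd=[0.2, 0.2, 0.2])
```

A robustness value near 1 means the ranking is insensitive to the data uncertainty; values well below 1 warn the buyer that the order of suppliers is fragile.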
2.
Probabilistic robustness analysis of uncertain control systems: an adaptive importance sampling method. Total citations: 2 (self: 0, others: 2)
The adaptive importance sampling (AIS) method is applied to the probabilistic robustness analysis of uncertain control systems, overcoming the difficulty that standard Monte Carlo simulation (MCS) cannot efficiently handle rare events. A new AIS scheme is presented. First, a recursive conditional-mode estimation algorithm generates a set of uncertain-parameter vector samples for which the system is unstable or its performance unacceptable. These samples are then used to estimate the parameters of an initial Gaussian importance sampling density, and the subsequent iterative simulation is carried out. Simulation results verify the effectiveness of the method.
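The two-stage structure described in this abstract, first locate failure samples, then fit a Gaussian importance density to them and reweight, can be sketched for a toy "instability" region. Everything here is illustrative: an inflated search density stands in for the paper's recursive conditional-mode search, the nominal uncertainty is iid standard normal, and the failure set is an arbitrary half-plane.

```python
import numpy as np

def ais_failure_prob(unstable, dim=2, n_search=2000, n_is=5000, seed=1):
    """Two-stage adaptive importance sampling for a small failure probability.

    Stage 1 draws from an inflated density to find failure samples; stage 2
    fits a Gaussian importance density to them and reweights.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 3.0, size=(n_search, dim))   # inflated search density
    fail = x[unstable(x)]
    mu, sd = fail.mean(axis=0), fail.std(axis=0) + 1e-6
    y = rng.normal(mu, sd, size=(n_is, dim))         # fitted importance density
    # log density ratio nominal/importance (the shared (2*pi)^(-d/2) cancels)
    log_f = -0.5 * (y ** 2).sum(axis=1)
    log_g = -0.5 * (((y - mu) / sd) ** 2).sum(axis=1) - np.log(sd).sum()
    return float(np.mean(np.exp(log_f - log_g) * unstable(y)))

# toy instability region: parameter sum exceeding 5 (true probability ~2e-4)
p_fail = ais_failure_prob(lambda x: x.sum(axis=1) > 5.0)
```

Crude MCS would need on the order of 10^6 samples to see this event reliably; the fitted importance density concentrates the samples on the failure set and recovers the probability with a few thousand draws.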
3.
This paper studies the problem that, in practical applications of quality function deployment, resource constraints prevent an enterprise from fully satisfying the importance ratings of all customer requirements. By improving the traditional analytic hierarchy process, a customer-requirement importance ranking method based on a Monte Carlo-AHP approach is proposed, in which the lowest-ranked requirement items are pruned so that the enterprise's resource constraints can be met. A data envelopment analysis (DEA) model is then used to compute the enterprise's relative efficiency when competitive differences are taken into account, and the final customer-requirement importance is determined from this relative efficiency together with the basic importance computed by AHP. Finally, an application example of the method is given.
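The Monte Carlo-AHP step mentioned in this abstract can be sketched as follows: perturb the pairwise-comparison judgements, recompute the principal-eigenvector weights each replication, and average. The log-normal perturbation model and its spread are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def mc_ahp_weights(A, sd=0.1, n_rep=500, seed=0):
    """Monte Carlo-AHP sketch: perturb the pairwise comparisons log-normally,
    recompute the principal eigenvector each time, and average the weights."""
    rng = np.random.default_rng(seed)
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    iu = np.triu_indices(n, 1)
    avg = np.zeros(n)
    for _ in range(n_rep):
        P = np.ones((n, n))
        P[iu] = A[iu] * np.exp(rng.normal(0.0, sd, size=len(iu[0])))
        P[iu[1], iu[0]] = 1.0 / P[iu]               # keep reciprocity
        w = np.ones(n) / n
        for _ in range(100):                        # power iteration
            w = P @ w
            w /= w.sum()
        avg += w
    return avg / n_rep

# three hypothetical customer requirements, judged 1-9 Saaty style
w = mc_ahp_weights([[1, 3, 5], [1/3, 1, 3], [1/5, 1/3, 1]])
```

The averaged weight vector gives the importance ranking; under a resource budget the requirements at the bottom of this ranking are the candidates for pruning.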
4.
Two algorithms, and corresponding Fortran computer programs, for the computation of posterior moments and densities using the principle of importance sampling are described in detail. The first algorithm makes use of a multivariate Student t importance function as approximation of the posterior. It can be applied when the integrand is moderately skew. The second algorithm makes use of a decomposition: a multivariate normal importance function is used to generate directions (lines) and one-dimensional classical quadrature is used to evaluate the integrals defined on the generated lines. The second algorithm can be used in cases where the integrand is possibly very skew in any direction.
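The first algorithm above can be sketched in one dimension (the paper treats the multivariate case, in Fortran): draw from a Student t importance function and form self-normalised estimates of the posterior moments. The unnormalised standard-normal target used below is only a test case; the t normalising constant is dropped because it cancels in the self-normalised weights.

```python
import numpy as np

def is_posterior_moments(log_post, loc, scale, df=5, n=20000, seed=0):
    """Posterior mean and variance by importance sampling with a Student t
    importance function (1-D sketch of the first algorithm)."""
    rng = np.random.default_rng(seed)
    x = loc + scale * rng.standard_t(df, size=n)
    # log density of the t importance function, up to its constant
    log_g = -(df + 1) / 2 * np.log1p(((x - loc) / scale) ** 2 / df) - np.log(scale)
    w = np.exp(log_post(x) - log_g)
    w /= w.sum()                                    # self-normalised weights
    mean = np.sum(w * x)
    var = np.sum(w * (x - mean) ** 2)
    return mean, var

# test target: unnormalised N(0,1); the t importance function is deliberately
# offset and overdispersed, as one would choose for a heavy-tailed posterior
mean_, var_ = is_posterior_moments(lambda x: -0.5 * x ** 2, loc=0.5, scale=1.5)
```

The heavy t tails keep the weights bounded even when the posterior is somewhat skewed, which is exactly why the paper prefers a Student t over a normal importance function for this algorithm.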
5.
6.
This work focuses on the fast computation of the moment-independent importance measure δi. We first analyse why δi can pose a computational complexity problem. One reason is the use of two-loop Monte Carlo simulation, whose rate of convergence is O(N^(-1/4)); another is the computation of the norm of the difference between a density and a conditional density. We find that these are nonessential difficulties and propose corresponding improvements: a kernel estimate is introduced to avoid the two-loop Monte Carlo simulation, and a moment expansion of the associated norm, which is not simply obtained via the Edgeworth series, is proposed to avoid the density estimation. A fast computational method for δi is then introduced, in which all δi can be obtained from a single sample set. Comparison of the numerical error analyses shows that the proposed method clearly improves computational efficiency.
7.
8.
I.M. Sobol' 《Computer Physics Communications》2010,181(7):1212-1217
A new derivative-based criterion τy for groups of input variables is presented. It is shown that there is a link between global sensitivity indices and the new derivative-based measure: small values of the derivative-based measure are proved to imply small values of the total sensitivity indices. However, for highly nonlinear functions the ranking of important variables by derivative-based measures can differ from that based on the global sensitivity indices. The computational costs of evaluating global sensitivity indices and derivative-based measures are compared, and some important tests are considered.
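The comparison in this abstract can be illustrated by estimating both quantities for a test function: a derivative-based measure nu_i = E[(df/dx_i)^2] via finite differences, and the total Sobol index via the Jansen pick-freeze estimator. The linear test function and all sample sizes are illustrative; for U(0,1) inputs, a Poincaré-type inequality of the kind the abstract alludes to bounds the total index by nu_i / (pi^2 Var f).

```python
import numpy as np

def dgsm_and_total_index(f, d, n=4096, h=1e-4, seed=0):
    """Monte Carlo estimates of the derivative-based measure nu_i and the
    total Sobol index (Jansen estimator) for iid U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA = f(A)
    var = fA.var()
    nu, s_tot = [], []
    for i in range(d):
        # derivative-based measure via central finite differences
        Ap, Am = A.copy(), A.copy()
        Ap[:, i] += h
        Am[:, i] -= h
        nu.append(np.mean(((f(Ap) - f(Am)) / (2 * h)) ** 2))
        # Jansen total-index estimator: resample only column i from B
        AB = A.copy()
        AB[:, i] = B[:, i]
        s_tot.append(np.mean((fA - f(AB)) ** 2) / (2 * var))
    return np.array(nu), np.array(s_tot)

f = lambda X: 3.0 * X[:, 0] + 1.0 * X[:, 1]
nu, st = dgsm_and_total_index(f, d=2)
```

For this linear function the two measures agree on the ranking (nu = (9, 1), total indices near (0.9, 0.1)); the abstract's point is that for highly nonlinear functions they need not.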
9.
Mathematical modeling of plant growth has gained increasing interest in recent years due to its potential applications. A general family of models, known as functional-structural plant models (FSPMs) and formalized as dynamic systems, serves as the basis for the current study. Modeling, parameterization and estimation are very challenging problems due to the complicated mechanisms involved in plant evolution. A specific type of non-homogeneous hidden Markov model has been proposed as an extension of the GreenLab FSPM to study a certain class of plants with known organogenesis. In such a model, the maximum likelihood estimator cannot be derived explicitly. Thus, a stochastic version of an expectation conditional maximization (ECM) algorithm was adopted, where the E-step was approximated by sequential importance sampling with resampling (SISR). The complexity of the E-step creates the need for the design and comparison of different simulation methods for its approximation. In this direction, three variants of SISR and a Markov chain Monte Carlo (MCMC) approach are compared for their efficiency in parameter estimation on simulated and real sugar beet data, where observations are taken by censoring the plant's evolution (destructive measurements). The MCMC approach appears to be more efficient for this particular application context and also for a large variety of crop plants. Moreover, a data-driven automated MCMC-ECM algorithm for finding an appropriate sample size in each ECM step, and also an appropriate number of ECM steps, is proposed. Based on the available real dataset, some competing models are compared via model selection techniques.
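The SISR machinery named in this abstract is, at its core, a bootstrap particle filter. The sketch below runs it on a generic linear-Gaussian state-space model, not the GreenLab plant model; the dynamics, noise levels and particle count are illustrative assumptions.

```python
import numpy as np

def sisr(y, n_part=2000, a=0.9, q=1.0, r=1.0, seed=0):
    """Sequential importance sampling with resampling (bootstrap filter) for
    x_t = a*x_{t-1} + N(0, q),  y_t = x_t + N(0, r).  Returns filtered means."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_part)
    means = []
    for yt in y:
        x = a * x + rng.normal(0.0, np.sqrt(q), n_part)   # propagate particles
        logw = -0.5 * (yt - x) ** 2 / r                   # weight by likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))                       # filtered mean
        x = x[rng.choice(n_part, n_part, p=w)]            # resample
    return np.array(means)

# simulate 200 steps of the model and filter them
rng = np.random.default_rng(1)
xs, ys, x = [], [], 0.0
for _ in range(200):
    x = 0.9 * x + rng.normal(0.0, 1.0)
    xs.append(x)
    ys.append(x + rng.normal(0.0, 1.0))
m = sisr(ys)
```

The resampling step is what keeps the particle cloud from degenerating over long observation sequences; in the paper the same loop supplies the Monte Carlo approximation of the E-step inside ECM.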
10.
David Fernando Muñoz David Gonzalo Muñoz Adán Ramírez-López 《International Transactions in Operational Research》2013,20(4):493-513
The main purpose of this paper is to discuss how a Bayesian framework is appropriate for incorporating uncertainty about the parameters of the model used for demand forecasting. We first present a general Bayesian framework that allows us to consider a complex forecasting model. Using this framework, we specialize, for simplicity, to the continuous-review system to show how the main performance measures required for inventory management (service levels and reorder points) can be estimated from the output of simulation experiments. We discuss the use of two estimation methodologies: posterior sampling (PS) and Markov chain Monte Carlo (MCMC). We show that, under suitable regularity conditions, the estimators obtained from PS and MCMC satisfy a corresponding central limit theorem, so that they are consistent, and the accuracy of each estimator can be assessed by computing an asymptotically valid half-width from the output of the simulation experiments. This approach is particularly useful when the forecasting model is complex in the sense that analytical expressions for service levels and/or reorder points are not available.
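The posterior-sampling (PS) idea in this abstract can be sketched with a deliberately simple conjugate model, Poisson demand with a Gamma(1, 1) prior, rather than the paper's general framework: draw the demand rate from its posterior, simulate lead-time demand under each draw, and read the reorder point off the predictive quantile for the target service level.

```python
import numpy as np

def ps_reorder_point(demand_history, lead_time, service_level=0.95,
                     n_post=4000, seed=0):
    """Posterior-sampling estimate of a reorder point under parameter
    uncertainty (illustrative Poisson/Gamma model, not the paper's)."""
    rng = np.random.default_rng(seed)
    a = 1.0 + np.sum(demand_history)          # posterior shape
    b = 1.0 + len(demand_history)             # posterior rate
    lam = rng.gamma(a, 1.0 / b, size=n_post)  # posterior draws of demand rate
    ltd = rng.poisson(lam * lead_time)        # predictive lead-time demand
    return int(np.quantile(ltd, service_level))

# eight hypothetical periods of demand, lead time of two periods
r = ps_reorder_point([9, 11, 10, 12, 8, 10, 9, 11], lead_time=2)
```

Because the rate is drawn rather than fixed at its point estimate, the predictive lead-time demand is overdispersed relative to a plain Poisson, and the resulting reorder point is correspondingly more conservative.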
11.
In recent years several approaches have been proposed to overcome the multiple-minima problem associated with nonlinear optimization techniques used in the analysis of molecular conformations. One such technique based on a parallel Monte Carlo search algorithm is analyzed. Experiments on the Intel iPSC/2 confirm that the attainable parallelism is limited by the underlying acceptance rate in the Monte Carlo search. It is proposed that optimal performance can be achieved in combination with vector processing. Tests on both the IBM 3090 and Intel iPSC/2-VX indicate that vector performance is related to molecule size and vector pipeline latency.
12.
This paper concerns the application of copula functions in VaR valuation. The copula function is used to model the dependence structure of multivariate assets. After introducing the traditional Monte Carlo simulation method and the pure copula method, we present a new algorithm based on mixture copula functions and the dependence measure Spearman's rho. This new method is used to simulate daily returns of two stock market indices in China, the Shanghai Stock Composite Index and the Shenzhen Stock Composite Index, and then to empirically calculate six risk measures, including VaR and conditional VaR. The results are compared with those derived from the traditional Monte Carlo method and the pure copula method. The comparison shows that the dependence structure between asset returns plays a more important role in evaluating risk measures than the form of the marginal distributions.
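The copula-based simulation pipeline this abstract describes can be sketched with a single Gaussian copula in place of the paper's mixture-copula construction: draw correlated normals, push them through the normal CDF to get coupled uniforms, then apply the inverse CDF of each marginal. The correlation, the Laplace marginals and their scales are illustrative assumptions, not values estimated from the two Chinese indices.

```python
import numpy as np
from math import erf

def gaussian_copula_var(rho=0.6, b1=0.007, b2=0.010, alpha=0.99,
                        n_days=200000, seed=0):
    """Daily VaR and CVaR of an equally weighted two-asset portfolio
    simulated from a Gaussian copula with Laplace marginals."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n_days)
    Phi = np.vectorize(erf)
    u = 0.5 * (1.0 + Phi(z / np.sqrt(2.0)))       # copula: coupled uniforms

    def laplace_ppf(u, b):                        # inverse CDF of Laplace(0, b)
        return -b * np.sign(u - 0.5) * np.log(1.0 - 2.0 * np.abs(u - 0.5))

    port = 0.5 * laplace_ppf(u[:, 0], b1) + 0.5 * laplace_ppf(u[:, 1], b2)
    loss = -port
    var = np.quantile(loss, alpha)                # Value at Risk
    cvar = loss[loss >= var].mean()               # conditional VaR (tail mean)
    return var, cvar

var99, cvar99 = gaussian_copula_var()
```

Separating the copula from the marginals like this is what lets the paper vary the dependence structure (mixture copulas calibrated via Spearman's rho) independently of the marginal return distributions.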
13.
Markowitz's mean-variance (M-V) model has received widespread acceptance as a practical tool for portfolio optimization, and his seminal work has been widely extended in the literature. The aim of this article is to extend the M-V method to hybrid decision systems. We suggest a new chance-variance (C-V) criterion to model returns characterized by fuzzy random variables. For this purpose, we develop two types of C-V models for portfolio selection problems in hybrid uncertain decision systems. The type I C-V model minimizes the variance of the total expected return rate subject to a chance constraint, while the type II C-V model maximizes the chance of achieving a prescribed return level subject to a variance constraint; the two types thus reflect investors' different attitudes toward risk. The computation of the variance and the chance distribution is considered. For general fuzzy random returns, we suggest an approximation method for computing the variance and the chance distribution so that C-V models can be turned into approximating models. When the returns are characterized by trapezoidal fuzzy random variables, we employ the variance and chance distribution formulas to turn C-V models into equivalent stochastic programming problems. Since these equivalent problems include a number of probability distribution functions in their objective and constraint functions, conventional solution methods cannot be applied directly. We therefore design a heuristic algorithm that combines the Monte Carlo (MC) method with particle swarm optimization (PSO): the MC method computes the probability distribution functions, and the PSO algorithm solves the stochastic programming problems. Finally, we present a portfolio selection problem to demonstrate the modeling ideas and the effectiveness of the designed algorithm, and we compare the proposed C-V method with the M-V method on this problem via numerical experiments.
14.
Chien-Tai Lin Cheng-Chieh Chou Yen-Lung Huang 《Computational statistics & data analysis》2012,56(3):451-467
Recently, progressive hybrid censoring schemes have become quite popular in life-testing and reliability studies. In this paper, we investigate maximum likelihood estimation and Bayesian estimation for a two-parameter Weibull distribution based on adaptive Type-I progressively hybrid censored data. The Bayes estimates of the unknown parameters are obtained by using the approximation forms of Lindley (1980) and Tierney and Kadane (1986), as well as two Markov chain Monte Carlo methods, under the assumption of gamma priors. Computational formulae for the expected number of failures are provided, and they can be used to determine the optimal adaptive Type-I progressive hybrid censoring scheme under a pre-determined experimental budget.
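The likelihood backbone of this abstract, Weibull estimation when some units are censored rather than observed to failure, can be sketched for plain Type-I censoring (the paper's adaptive progressive scheme only changes which times end up censored). The classical profile-likelihood equation for the shape parameter is solved by bisection.

```python
import numpy as np

def weibull_mle_censored(t, censored, k_lo=0.1, k_hi=10.0, tol=1e-8):
    """MLE of Weibull(shape k, scale lam) from right-censored data, via
    bisection on the profile score equation for k (a classical sketch)."""
    t = np.asarray(t, dtype=float)
    fail = ~np.asarray(censored, dtype=bool)
    r = fail.sum()                                   # number of failures

    def score(k):                                    # profile d(loglik)/dk = 0
        tk = t ** k
        return (tk * np.log(t)).sum() / tk.sum() - 1.0 / k - np.log(t[fail]).mean()

    while k_hi - k_lo > tol:                         # score is increasing in k
        k_mid = 0.5 * (k_lo + k_hi)
        if score(k_mid) > 0:
            k_hi = k_mid
        else:
            k_lo = k_mid
    k = 0.5 * (k_lo + k_hi)
    lam = ((t ** k).sum() / r) ** (1.0 / k)          # profile MLE of the scale
    return k, lam

# simulated check: Weibull(2, 1) lifetimes, Type-I censored at time 1.5
rng = np.random.default_rng(0)
raw = 1.0 * rng.weibull(2.0, size=500)
t = np.minimum(raw, 1.5)
k_hat, lam_hat = weibull_mle_censored(t, censored=(raw > 1.5))
```

Censored units still contribute through the survival term of the likelihood, which is why they appear in the sums over all times but not in the mean of the log failure times.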
15.
Markov chain Monte Carlo (MCMC) techniques revolutionized statistical practice in the 1990s by providing an essential toolkit for making the rigor and flexibility of Bayesian analysis computationally practical. At the same time, the increasing prevalence of massive datasets and the expansion of the field of data mining have created the need for statistically sound methods that scale to these large problems. Except for the most trivial examples, current MCMC methods require a complete scan of the dataset for each iteration, eliminating their candidacy as feasible data mining techniques. In this article we present a method for making Bayesian analysis of massive datasets computationally feasible. The algorithm simulates from a posterior distribution that conditions on a smaller, more manageable portion of the dataset. The remainder of the dataset may be incorporated by reweighting the initial draws using importance sampling. Computation of the importance weights requires a single scan of the remaining observations. While importance sampling increases efficiency in data access, it comes at the expense of estimation efficiency. A simple modification, based on the rejuvenation step used in particle filters for dynamic systems models, sidesteps the loss of efficiency with only a slight increase in the number of data accesses. To show proof-of-concept, we demonstrate the method on two examples. The first is a mixture of transition models that has been used to model web traffic and robotics; for this example we show that estimation efficiency is not affected while offering a 99% reduction in data accesses. The second example applies the method to Bayesian logistic regression and yields a 98% reduction in data accesses.
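The subset-then-reweight scheme described above can be sketched on the simplest possible model, a normal mean with known unit variance and a flat prior, where the subset posterior is available in closed form and the remaining observations enter the importance weights only through their sufficient statistics, i.e. a single scan. The model and sizes are illustrative, not the paper's examples.

```python
import numpy as np

def subset_then_reweight(data, n_subset, n_draws=5000, seed=0):
    """Draw from the posterior conditioned on a subset, then reweight by the
    likelihood of the remaining observations (one pass over the remainder)."""
    rng = np.random.default_rng(seed)
    sub, rest = data[:n_subset], data[n_subset:]
    # conjugate subset posterior: N(mean(sub), 1/n_subset)
    theta = rng.normal(sub.mean(), 1.0 / np.sqrt(n_subset), size=n_draws)
    # remainder log-likelihood via sufficient statistics (constant dropped)
    n_r, s_r = len(rest), rest.sum()
    log_w = theta * s_r - 0.5 * n_r * theta ** 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    ess = 1.0 / np.sum(w ** 2)                      # effective sample size
    return np.sum(w * theta), ess

rng = np.random.default_rng(42)
data = rng.normal(0.3, 1.0, size=20000)
post_mean, ess = subset_then_reweight(data, n_subset=500)
```

The effective sample size quantifies the estimation-efficiency loss the abstract mentions; the paper's rejuvenation step exists precisely to restore it when the weights become too uneven.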
16.
This paper considers the consensus tracking problem, synthesised with transient performance improvement, for a network of unmanned aerial vehicles with faults. In practical situations, parameter variations, modelling errors and disturbances are of particular interest; thus, the dynamics of the vehicles are assumed to be subject to parameter uncertainty. As the main contribution of this paper, a set of fault-estimator-based protocols is proposed to drive the overall network performance below the given H∞ index synthesised with the transient performance index. Sufficient conditions for designing the protocols, which utilise the relative output information among neighbouring vehicles, are given by applying robust control theory. Simulations are performed to validate the proposed results.
17.
Takahisa Kawai Yousuke Tadokoro Taro Hayashi Jin Yoshimura 《International journal of systems science》2013,44(10):947-957
Spatial and temporal changes in fashion are very complicated in an information-oriented society. In this article, we introduce a lattice model of fashion composed of two competing trends. Simulation is carried out by two methods: local and global interactions. In the former case interaction occurs between adjacent lattice sites, while in the latter it occurs between any pair of lattice sites. Computer simulations reveal that fashion is more prevalent under global interaction than under local interaction. Various spatial patterns in fashion are analysed by both auto- and cross-correlations. We universally find a power law that leads to collective behaviours of fashion: if the number of people following a trend decreases sharply, they become localised in scattered, very small areas. For the producer/maker of fashion, global transmission is far more important than local dispersal.
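The local-versus-global distinction in this abstract can be sketched with a minimal two-trend lattice: at each update a random site copies the trend of a neighbour, adjacent under local interaction, arbitrary under global interaction. The voter-model-style update rule is an illustrative stand-in, not the article's actual dynamics.

```python
import numpy as np

def fashion_lattice(n=30, steps=30000, mode="local", seed=0):
    """Two-trend lattice fashion model sketch with local or global copying."""
    rng = np.random.default_rng(seed)
    grid = rng.integers(0, 2, size=(n, n))          # two competing trends, 0/1
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(steps):
        i, j = rng.integers(0, n, size=2)
        if mode == "local":                          # copy an adjacent site
            di, dj = moves[rng.integers(0, 4)]
            src = ((i + di) % n, (j + dj) % n)
        else:                                        # copy any site on the lattice
            src = (rng.integers(0, n), rng.integers(0, n))
        grid[i, j] = grid[src]
    return grid

g = fashion_lattice(mode="global")
frac = g.mean()                                      # prevalence of trend 1
```

Under local copying the trends coarsen into spatial domains, while global copying mixes the lattice and drives one trend toward dominance, which is the qualitative contrast the article's auto- and cross-correlation analysis quantifies.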
18.
Evaluation of the measurement uncertainty of noise parameters based on the Monte Carlo method (MCM) is carried out. The equivalent noise-parameter equations serve as the measurement model for the uncertainty evaluation: the physical quantities obtained from the measurement system are used to solve the equations, yielding data that reflect the distribution of the equivalent noise parameters; the noise parameters and their probability distributions are then derived from the equivalent noise parameters, and finally the measurement uncertainty of the noise parameters is obtained. The method effectively combines random mathematical simulation with physical measurement boundary criteria to realize the uncertainty evaluation of noise-parameter measurement results; it complies with international standards, is applicable to many kinds of measurement systems, and has high general applicability. Finally, verification with experimental data demonstrates the effectiveness of the evaluation method and the authenticity and reliability of the uncertainty data.
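The MCM workflow this abstract follows, in the spirit of GUM Supplement 1, is: draw the input quantities from their assigned distributions, propagate each draw through the measurement model, and report the mean, standard uncertainty and a coverage interval of the output. The measurement model and input distributions below are illustrative, not the paper's equivalent noise-parameter equations.

```python
import numpy as np

def mcm_uncertainty(model, input_draws, coverage=0.95):
    """Monte Carlo propagation of distributions: returns the output mean,
    standard uncertainty, and a probabilistically symmetric coverage interval."""
    y = model(*input_draws)
    lo, hi = np.quantile(y, [(1 - coverage) / 2, (1 + coverage) / 2])
    return y.mean(), y.std(ddof=1), (lo, hi)

rng = np.random.default_rng(0)
M = 200000
# illustrative inputs: a gain g ~ N(2, 0.02^2) and an offset b ~ U(-0.1, 0.1)
g = rng.normal(2.0, 0.02, M)
b = rng.uniform(-0.1, 0.1, M)
mean, u, (lo, hi) = mcm_uncertainty(lambda g, b: g * 1.5 + b, (g, b))
```

Unlike the law-of-propagation approach, the coverage interval here is read directly from the simulated output distribution, so it remains valid when the model is nonlinear or the output distribution is non-Gaussian.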
19.
Alireza Fallah Ehsan Jabbari Reza Babaee 《Computers & Mathematics with Applications》2019,77(3):815-829
In this research, the Kansa or multiquadric (MQ) method has been developed for solving seepage problems in arbitrary 2D and 3D domains. This is the first application of the method to seepage analysis in both confined and unconfined porous media. A domain decomposition approach has been employed to apply the MQ method easily in inhomogeneous and irregular complex geometries and to decrease the computational costs. For determining the optimum shape parameter, which strongly affects the accuracy of MQ and other RBF methods, a new scheme that drastically decreases the computational time is introduced. The efficiency of the proposed algorithm has been examined under various radial basis functions, numbers of interpolating points and point distributions, through a numerical example with an analytical solution. Finally, three examples with different boundary conditions are presented. Comparing the results of the examples with other numerical methods indicates that the present approach has high capability and accuracy in solving seepage problems.
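The Kansa collocation idea behind this abstract can be sketched in one dimension: expand the solution in multiquadric basis functions centred at the nodes, enforce the PDE at interior nodes and the boundary data at boundary nodes, and solve the resulting linear system. The toy Poisson problem stands in for the Laplace/Poisson equations that seepage analysis leads to, and the shape parameter c is fixed by hand here rather than optimised as in the paper.

```python
import numpy as np

def kansa_1d(f, bc0, bc1, n=21, c=0.2):
    """Kansa (multiquadric) collocation for u'' = f on [0, 1] with
    Dirichlet data u(0)=bc0, u(1)=bc1.  Returns a callable approximation."""
    x = np.linspace(0.0, 1.0, n)
    dx = x[:, None] - x[None, :]
    phi = np.sqrt(dx ** 2 + c ** 2)                 # MQ basis matrix
    d2phi = c ** 2 / phi ** 3                       # its second derivative
    A = d2phi.copy()
    A[0], A[-1] = phi[0], phi[-1]                   # boundary rows enforce u itself
    rhs = f(x)
    rhs[0], rhs[-1] = bc0, bc1
    coef = np.linalg.solve(A, rhs)
    return lambda xe: np.sqrt((xe[:, None] - x[None, :]) ** 2 + c ** 2) @ coef

# manufactured solution u = sin(pi x), so f = -pi^2 sin(pi x)
u = kansa_1d(lambda x: -np.pi ** 2 * np.sin(np.pi * x), 0.0, 0.0)
xe = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(u(xe) - np.sin(np.pi * xe)))
```

Because the method is meshfree, the same construction extends to scattered nodes in 2D and 3D domains; the trade-off the paper addresses is that accuracy and conditioning both depend sharply on the shape parameter.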
20.
Changliang Gao Yongjian Yang Boshun Han 《Computers & Mathematics with Applications》2011,62(6):2393-2403
The filled function method is an efficient approach for finding global minimizers of multi-dimensional nonlinear functions in the absence of any restrictions. In this paper, we give a new definition of the filled function and an idea for constructing one; based on the new definition, a new class of filled functions with one parameter, possessing better properties, is presented. Theoretical properties of the new class of filled functions are investigated, and a new algorithm is developed from the resulting filled function method. The implementation of the algorithm on seven test problems with dimensions up to 30 is reported, and comparisons with other filled function methods demonstrate that the new algorithm is more efficient.