Similar Literature
A total of 20 similar documents were found.
1.
This paper presents a multistage stochastic programming model for strategic capacity planning at a major US semiconductor manufacturer. The main sources of uncertainty in this multi-year planning problem are the demand for different technologies and the capacity estimates for each fabrication (fab) facility. We test the model using real-world scenarios that require capacity plans for 29 technology categories across five fab facilities. The objective of the model is to minimize the gaps between product demands and the capacity allocated to the technology specified by each product. We consider two different scenario-analysis constructs: first, an independent scenario structure, in which we assume no prior information and the model systematically enumerates possible states in each period; the states from one period to another are independent of each other. Second, we consider an arbitrary scenario construct, which allows the planner to sample and evaluate arbitrary multi-period scenarios that capture the dependency between periods. In both cases, a scenario is defined as a multi-period path from the root to a leaf in the scenario tree. We conduct intensive computational experiments on these models using real data supplied by the semiconductor manufacturer. The purpose of our experiments is two-fold: first, to examine different degrees of scenario aggregation and their effects on the independent model's ability to achieve high-quality solutions. Using this as a benchmark, we then compare the results from the arbitrary model and illustrate the different uses of the two scenario constructs. We show that the independent model allows a varying degree of scenario aggregation without significant prior information, while the arbitrary model allows planners to play out specific scenarios given prior information.
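To make the capacity-gap objective concrete, the sketch below sets up a toy version of the allocation subproblem as a linear program over enumerated scenarios: allocate a shared capacity pool across technologies so as to minimize the expected shortfall. The demand figures, scenario probabilities, the single shared capacity pool, and the use of scipy.optimize.linprog are illustrative assumptions, not the authors' actual model.

```python
# Minimal sketch: allocate fab capacity to technologies to minimize the
# expected shortfall (demand minus allocated capacity) over enumerated
# scenarios.  All numbers and the single-capacity-pool simplification are
# assumptions for illustration only.
import numpy as np
from scipy.optimize import linprog

demand = np.array([[80, 40, 30],    # scenario 1 demand per technology
                   [60, 70, 20],    # scenario 2
                   [90, 50, 45]])   # scenario 3
prob = np.array([0.3, 0.5, 0.2])    # scenario probabilities
total_capacity = 150.0              # shared capacity pool
S, T = demand.shape

# Decision variables: x (T allocations) followed by u (S*T shortfalls).
n = T + S * T
c = np.zeros(n)
for s in range(S):
    c[T + s * T: T + (s + 1) * T] = prob[s]      # minimize expected shortfall

A_ub, b_ub = [], []
for s in range(S):
    for t in range(T):
        row = np.zeros(n)
        row[t] = 1.0                              # x_t
        row[T + s * T + t] = 1.0                  # u_{s,t}
        # d_{s,t} - x_t <= u_{s,t}   <=>   -x_t - u_{s,t} <= -d_{s,t}
        A_ub.append(-row)
        b_ub.append(-demand[s, t])
cap_row = np.zeros(n)
cap_row[:T] = 1.0
A_ub.append(cap_row)                              # sum_t x_t <= total capacity
b_ub.append(total_capacity)

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=[(0, None)] * n)
print("allocation:", res.x[:T], "expected shortfall:", res.fun)
```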

2.
Karabuk, Suleyman; Wu, S. David. IIE Transactions, 2002, 34(9): 743-759
Semiconductor capacity planning is a cross-functional decision that requires coordination between the marketing and manufacturing divisions. We examine the main issues of a decentralized coordination scheme in a setting observed at a major US semiconductor manufacturer: marketing managers reserve capacity from manufacturing based on product demands while attempting to maximize profit; manufacturing managers allocate capacity to competing marketing managers so as to minimize operating costs while ensuring efficient resource utilization. This cross-functional planning problem has two important characteristics: (i) both demands and capacity are subject to uncertainty; and (ii) all decision entities own private information while being self-interested. To study the issues of coordination, we first formulate the local marketing and manufacturing decision problems as separate stochastic programs. We then formulate a centralized stochastic programming model (JCA), which maximizes the firm's overall profit. (JCA) establishes a theoretical benchmark for performance, but it is achievable only when all planning information is public. If local decision entities are to keep their planning information private, we submit that the best achievable coordination corresponds to an alternative stochastic model (DCA). We analyze the relationship and the theoretical gap between (JCA) and (DCA), thereby establishing the price of decentralization. Next, we examine two mechanisms that coordinate the marketing and manufacturing decisions to achieve (DCA) using different degrees of information exchange. Using insights from the Auxiliary Problem Principle (APP), we show that under both coordination mechanisms the divisional proposals converge to the global optimal solution of (DCA). We illustrate the theoretical insights using numerical examples as well as a real-world case.

3.
Corrosion in complex coupled environments is an important issue in the corrosion field, because it is difficult to account for a large number of environmental factors and their interactions. Design of Experiments (DOE) offers a methodology for addressing this difficulty, although it is not yet widely used in the corrosion field. Corrosion of Ni-Cr-Mo-V steel in a deep-sea environment was therefore modeled to provide an example demonstrating the advantages of DOE. In addition, an artificial neural network (ANN) trained with the back-propagation method was developed so that the ANN model can be used to predict polarization curves of Ni-Cr-Mo-V steel under different complex sea environments without experimentation. Finally, the roles of environmental factors in the corrosion of Ni-Cr-Mo-V steel in the deep-sea environment are discussed.
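As a rough illustration of the ANN component, the sketch below trains a small back-propagation network (scikit-learn's MLPRegressor standing in for the paper's network) to map environmental factors and potential to a current-density response, then predicts a polarization curve at fixed deep-sea conditions. The input factors, their ranges, the network size, and the synthetic data are assumptions, not the paper's measurements.

```python
# Sketch of a back-propagation ANN mapping environmental factors to a
# polarization response; factors, ranges, and synthetic data are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Assumed inputs: [temperature (C), dissolved O2 (mg/L), pressure (MPa), potential (V)]
X = rng.uniform([2, 1, 10, -0.8], [15, 8, 60, 0.2], size=(500, 4))
# Synthetic "current density" response standing in for measured polarization data.
y = 0.05 * np.exp(3.0 * X[:, 3]) + 0.01 * X[:, 1] - 0.002 * X[:, 2] \
    + 0.001 * X[:, 0] + rng.normal(0, 0.01, 500)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X, y)

# Predict a polarization curve (current vs. potential) at fixed deep-sea conditions.
potentials = np.linspace(-0.8, 0.2, 50)
cond = np.column_stack([np.full(50, 4.0), np.full(50, 6.0),
                        np.full(50, 40.0), potentials])
print(model.predict(cond)[:5])
```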

4.
This paper describes an acceleration technique for experimentation by sequential simplex search. A modification of the Spendley, Hext, and Himsworth method, this technique employs a simplex of n + 1 observations in each sequential block of experiments when seeking the optimum of a system involving n independent variables. The objective in applying this technique is to determine, experimentally, optimum or near-optimum system conditions in a minimum number of sequential experimental blocks. The accelerated technique is shown to achieve near-optimal solutions in one-half to one-third the number of sequential blocks required by the other methods.
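A minimal sketch of the basic fixed-size simplex move in the Spendley-Hext-Himsworth spirit is shown below: the worst of n + 1 vertices is repeatedly reflected through the centroid of the others. The test function, the shrink-and-restart rule, and the stopping criterion are illustrative assumptions; the paper's acceleration scheme is not reproduced.

```python
# Minimal sketch of a fixed-size sequential simplex search: reflect the worst
# of n+1 vertices through the centroid of the remaining vertices.
import numpy as np

def objective(x):
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2   # assumed response to minimize

def simplex_search(x0, step=1.0, iters=60):
    n = len(x0)
    # initial simplex: start point plus one step along each coordinate axis
    simplex = np.vstack([x0] + [x0 + step * np.eye(n)[i] for i in range(n)])
    for _ in range(iters):
        f = np.array([objective(v) for v in simplex])
        worst = np.argmax(f)
        centroid = simplex[np.arange(n + 1) != worst].mean(axis=0)
        reflected = 2.0 * centroid - simplex[worst]      # mirror the worst vertex
        if objective(reflected) < f[worst]:
            simplex[worst] = reflected                    # keep simplex size fixed
        else:
            step *= 0.5                                   # crude shrink-and-restart
            best = simplex[np.argmin(f)]
            simplex = np.vstack([best] + [best + step * np.eye(n)[i] for i in range(n)])
    f = np.array([objective(v) for v in simplex])
    return simplex[np.argmin(f)]

print(simplex_search(np.array([0.0, 0.0])))
```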

5.
Most preset response surface methodology (RSM) designs offer ease of implementation and good performance over a wide range of process and design optimization applications. These designs, however, often lack the ability to adapt to the characteristics of the application and the experimental space so as to reduce the number of experiments necessary. Hence, they are not cost-effective for applications where the cost of experimentation is high or where experimentation resources are limited. In this paper, we present an adaptive sequential response surface methodology (ASRSM) for industrial experiments with high experimentation cost, limited experimental resources, and high design-optimization performance requirements. The proposed approach is a sequential adaptive experimentation approach that combines concepts from nonlinear optimization, design of experiments, and response surface optimization. ASRSM uses the information gained from previous experiments to design the subsequent experiment by simultaneously reducing the region of interest and identifying factor combinations for new experiments. Its major advantage is experimentation efficiency: for a given response target, it identifies the input factor combination (or the region containing it) in fewer experiments than classical single-shot RSM designs. Through extensive simulated experiments and real-world case studies, we show that the proposed ASRSM method outperforms the popular central composite design method and compares favorably with optimal designs. Copyright © 2012 John Wiley & Sons, Ltd.
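The toy loop below illustrates the general idea of sequential region-of-interest reduction: run a small factorial in the current region, recenter on the best observed point, shrink the region, and repeat. It is not the authors' ASRSM algorithm; the response function, design, and shrink factor are assumptions.

```python
# Toy illustration of sequential region-of-interest reduction (NOT ASRSM):
# evaluate a 2^2 factorial plus center point, recenter on the best point,
# shrink the region, and repeat.
import numpy as np
from itertools import product

def response(x):
    return (x[0] - 0.7) ** 2 + 2.0 * (x[1] - 0.3) ** 2   # assumed process response

center = np.array([0.0, 0.0])
half_width = 1.0
runs = 0
for stage in range(8):
    # coded 2^2 factorial (+/-1) plus center point, mapped into the current region
    design = np.array(list(product([-1.0, 1.0], repeat=2)) + [[0.0, 0.0]])
    points = center + half_width * design
    values = np.array([response(p) for p in points])
    runs += len(points)
    center = points[np.argmin(values)]    # recenter on the best observed point
    half_width *= 0.5                     # shrink the region of interest
print("best point after", runs, "runs:", center, "response:", response(center))
```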

6.
We formulate and evaluate a Bayesian approach to probabilistic input modeling for simulation experiments that accounts for the parameter and stochastic uncertainties inherent in most simulations and that yields valid predictive inferences about outputs of interest. We use prior information to construct prior distributions on the parameters of the input processes driving the simulation. Using Bayes' rule, we combine this prior information with the likelihood function of sample data observed on the input processes to compute the posterior parameter distributions. In our Bayesian simulation replication algorithm, we estimate parameter uncertainty by independently sampling new values of the input-model parameters from their posterior distributions on selected simulation runs; and we estimate stochastic uncertainty by performing multiple (conditionally) independent runs with each set of parameter values. We formulate performance measures relevant to both Bayesian and frequentist input-modeling techniques, and we summarize an experimental performance evaluation demonstrating the advantages of the Bayesian approach.
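A minimal sketch of the replication idea is given below: sample input-model parameters from their posterior, run several independent replications per parameter draw, and separate parameter uncertainty from stochastic uncertainty in the output. The conjugate Gamma prior, the synthetic data, and the toy "simulation" (total time to complete 20 jobs) are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch of a Bayesian simulation replication scheme: posterior parameter draws
# on the outside, independent replications on the inside.
import numpy as np

rng = np.random.default_rng(1)

# Observed data on the input process: exponential service times, unknown rate.
data = rng.exponential(scale=1 / 0.8, size=30)
a0, b0 = 2.0, 2.0                       # Gamma(shape, rate) prior on the rate
a_post = a0 + len(data)                 # conjugate Gamma posterior parameters
b_post = b0 + data.sum()

n_param_draws, n_reps = 50, 20
means = np.empty(n_param_draws)
within_var = np.empty(n_param_draws)
for i in range(n_param_draws):
    rate = rng.gamma(a_post, 1.0 / b_post)          # posterior draw of the rate
    # stochastic uncertainty: independent replications at this parameter value
    outputs = rng.exponential(scale=1 / rate, size=(n_reps, 20)).sum(axis=1)
    means[i] = outputs.mean()
    within_var[i] = outputs.var(ddof=1)

print("parameter uncertainty (variance of replication means):", means.var(ddof=1))
print("stochastic uncertainty (average within-draw variance):", within_var.mean())
```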

7.
This paper presents a practical method for load reconstruction on an advanced grid-stiffened (AGS) composite plate. In this method, the separate AGS ribs are smeared onto a continuous unsymmetrical plate. A forward response approximation model is then built to describe the dynamic response of the plate. Numerical verification indicates that the proposed model can simulate the structure with reasonable accuracy and high computing speed. We also adopt an optimization technique to recover the load history and location. The load history is recovered with a smoothing/filtering algorithm, and the load location is estimated with a linear search for the lowest value of a defined figure of merit J, which measures the difference between the calculated and the measured response. The feasibility of the reconstruction technique has been verified by numerical experiments in which good agreement was obtained. Because the method is computationally fast, it could be used to develop an automatic system that monitors, in real time, environmental conditions that may cause emergencies.
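The location step can be pictured with the toy search below: scan candidate locations, compute a figure of merit J (the squared mismatch between the calculated and "measured" responses), and keep the minimizer. The two-sensor forward model and noise level are stand-ins, not the paper's AGS plate model.

```python
# Toy version of the load-location search: minimize a figure of merit J over
# candidate locations.  The forward model below is an assumption.
import numpy as np

def forward_response(location, t, sensors=(0.1, 0.9)):
    """Assumed forward model: each sensor sees an amplitude decaying with distance to the load."""
    return np.concatenate([np.exp(-5.0 * abs(location - s)) * np.sin(2 * np.pi * 3.0 * t)
                           for s in sensors])

t = np.linspace(0.0, 1.0, 200)
true_location = 0.62
clean = forward_response(true_location, t)
measured = clean + np.random.default_rng(2).normal(0, 0.01, clean.size)

candidates = np.linspace(0.0, 1.0, 101)
J = np.array([np.sum((forward_response(c, t) - measured) ** 2) for c in candidates])
print("estimated load location:", candidates[np.argmin(J)])
```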

8.
We suggest an experimentation strategy for the robust design of empirically fitted models. The suggested approach is used to design experiments that minimize the variance of the optimal robust solution. The new design-of-experiments optimality criterion, termed Vs-optimal, prioritizes the estimation of a model's coefficients such that the variance of the optimal solution is minimized by the experiments performed. We discuss how the proposed criterion relates to known optimality criteria. We present an analytical formulation of the suggested approach for linear models and a numerical procedure for higher-order or nonpolynomial models. In comparison with conventional robust-design methods, our approach provides more information on the robust solution by numerically generating its multidimensional distribution. Moreover, in a case study, the proposed approach results in a better robust solution than these standard methods.

9.
A heuristic has been developed to determine the optimal operating parameters for high-speed milling machines that use permanent magnet brushless DC linear motors (PMBDCLM) as CNC feed drives. A design of experiments (DOE) was set up, and experimentation was conducted to measure forces, dimensional and geometrical tolerances (position, runout, total runout, circularity, cylindricity, straightness, parallelism, flatness, and angularity), and surface finish (kurtosis, skewness, spacing, wavelength, and peak-to-valley height) for contour and straight (or taper) operations. Based on the DOE results, a knowledge base has been built. In addition, relationships were generated between force and spindle speed and feed rate, and between force and the tolerance and surface-finish indices. A set of decision rules is applied to the knowledge base to provide optimal operating parameters that meet user-specified tolerances and surface finishes. An application to a mechanical part is illustrated.

10.
Plante, Robert. IIE Transactions, 2002, 34(6): 565-571
The determination of tolerance allocations among design parameters is an integral phase of product/process design. Such allocations are often necessary to achieve desired levels of product performance. Parametric and nonparametric methods have recently been developed for allocating multivariate tolerances. Parametric methods assume full information about the probability distribution of the design parameter processes, whereas nonparametric methods assume that only partial information is available, consisting only of the design parameter process variances. These methods currently assume that the relationship between the design parameters and each of the performance measures is linear. However, quadratic response functions are increasingly being used to provide better approximations of the relationships between performance measures and design parameters. This is especially prevalent where there is a multivariate set of performance measures that are functions of a common set of design parameters. In this research we propose both parametric and nonparametric multivariate tolerance allocation procedures that consider the more general case where these relationships can be represented by quadratic functions of the design parameters. We develop the corresponding methodology and nonlinear optimization models to accommodate and take advantage of the presence of interactions and other nonlinearities among suppliers.

11.
12.
Fu, Michael C.; Hill, D. IIE Transactions, 1997, 29(3): 233-243
We investigate the use of simultaneous perturbation stochastic approximation (SPSA) for the optimization of discrete-event systems via simulation. Stochastic approximation applied to simulation optimization is essentially a gradient-based method, so much recent research has focused on obtaining direct gradient estimates. However, such procedures are still not as universally applicable as finite-difference methods. On the other hand, traditional finite-difference-based stochastic approximation schemes require a large number of simulation replications when the number of parameters of interest is large, whereas the simultaneous perturbation method is a finite-difference-like method that requires only two simulations per gradient estimate, regardless of the number of parameters of interest. This can result in substantial computational savings for large-dimensional systems. We report simulation experiments conducted on a variety of discrete-event systems: a single-server queue, a queueing network, and a bus transit network. For the single-server queue, we also compare our work with algorithms based on finite differences and perturbation analysis.
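A textbook-style SPSA sketch is given below to show the two-evaluations-per-gradient idea: all coordinates are perturbed simultaneously with random ±1 signs, so the cost per gradient estimate does not grow with the number of parameters. The noisy quadratic stands in for a simulation output, and the gain-sequence constants are standard choices, not values from the paper.

```python
# Textbook-style SPSA: two noisy function evaluations per iteration yield a
# gradient estimate for all parameters at once.
import numpy as np

rng = np.random.default_rng(3)

def noisy_loss(theta):
    # stand-in for a stochastic simulation output
    return np.sum((theta - np.arange(len(theta))) ** 2) + rng.normal(0, 0.1)

theta = np.zeros(10)
a, c, A, alpha, gamma = 0.5, 0.1, 10.0, 0.602, 0.101   # standard SPSA gain choices
for k in range(1, 501):
    ak = a / (k + A) ** alpha
    ck = c / k ** gamma
    delta = rng.choice([-1.0, 1.0], size=theta.size)    # Rademacher perturbation
    g_hat = (noisy_loss(theta + ck * delta) - noisy_loss(theta - ck * delta)) / (2 * ck * delta)
    theta -= ak * g_hat
print(np.round(theta, 2))   # should approach [0, 1, 2, ..., 9]
```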

13.
In this paper, we consider a production system consisting of multiple tandem machines subject to random failures. The objective of the study is to find the production rates of the machines in order to minimize the total inventory and backlog costs. By combining analytical formalism and simulation-based statistical tools such as design of experiments (DOE) and response surface methodology (RSM), an approximation of the optimal control policy is obtained. The combined discrete/continuous simulation modeling is used to obtain an estimate of the cost in a fraction of the time necessary for discrete event simulation by reducing the number of events related to parts production. This is achieved by replacing the discrete dynamics of part production by a set of differential equations that describe this process. This technique makes it possible to tackle optimization problems that would otherwise be too time consuming. We provide some numerical examples of optimization and compare computational times between discrete event and discrete/continuous simulation modeling. The proposed combination of DOE, RSM and combined discrete/continuous simulation modeling allows us to obtain the optimization results in a fairly short time period on widely available computer resources.
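The DOE/RSM step can be pictured with the sketch below: evaluate a stand-in noisy cost function at a small design over two machine production rates, fit a second-order response surface by least squares, and minimize it on a grid. The cost function, design, and noise level are assumptions, not the paper's simulation model.

```python
# Sketch of the DOE/RSM step: fit a quadratic response surface to simulated
# costs over two production rates and minimize the fitted surface.
import numpy as np
from itertools import product

rng = np.random.default_rng(4)

def simulated_cost(u1, u2):
    # stand-in for the combined discrete/continuous simulation output
    return 3 * (u1 - 1.2) ** 2 + 2 * (u2 - 0.8) ** 2 + 0.5 * u1 * u2 + rng.normal(0, 0.05)

# 3^2 full factorial over the two production rates
levels = [0.5, 1.0, 1.5]
X = np.array(list(product(levels, levels)))
y = np.array([simulated_cost(u1, u2) for u1, u2 in X])

# second-order model terms: 1, u1, u2, u1*u2, u1^2, u2^2
def basis(u):
    u1, u2 = u[:, 0], u[:, 1]
    return np.column_stack([np.ones_like(u1), u1, u2, u1 * u2, u1 ** 2, u2 ** 2])

beta, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

grid = np.array(list(product(np.linspace(0.5, 1.5, 101), repeat=2)))
pred = basis(grid) @ beta
print("approximate optimal production rates:", grid[np.argmin(pred)])
```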

14.
15.
Prototype testing and experimentation play a key role in the development of new products. It is common practice to build a single prototype product and then test it at specified operating conditions. It is often beneficial, however, to make several variants of a prototype according to a fractional factorial design. The information obtained can be important in comparing design options and improving product performance and quality. In such experiments the response of interest is often not a single number but a performance curve over the test conditions. In this article we develop a general method for the design and analysis of prototype experiments that combines orthogonal polynomials with two-level fractional factorials. The proposed method is simple to use and has wide applicability. We explain our ideas with reference to an experiment reported by Taguchi on carbon monoxide exhaust of combustion engines. We then apply them to an experiment on a prototype fluid-flow controller.
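A toy version of the combined analysis is sketched below: each prototype variant from a 2^(3-1) fractional factorial yields a performance curve over the test conditions; orthogonal (Legendre) polynomial coefficients are fitted to each curve, and factor effects are estimated on each coefficient. The curves and effect sizes are synthetic assumptions, not the Taguchi carbon monoxide data.

```python
# Toy combination of orthogonal polynomials with a two-level fractional factorial.
import numpy as np
from itertools import product
from numpy.polynomial import legendre

rng = np.random.default_rng(5)

# 2^(3-1) fractional factorial with defining relation C = AB (coded +/-1 levels)
ab = np.array(list(product([-1.0, 1.0], repeat=2)))
design = np.column_stack([ab, ab[:, 0] * ab[:, 1]])

x = np.linspace(-1, 1, 25)                       # coded test conditions
coeffs = []
for a, b, c in design:
    # synthetic performance curve: factors shift the level, slope, and curvature
    y = 2.0 + 0.5 * a + (1.0 + 0.3 * b) * x + 0.2 * c * x ** 2 \
        + rng.normal(0, 0.05, x.size)
    coeffs.append(legendre.legfit(x, y, deg=2))  # orthogonal polynomial coefficients
coeffs = np.array(coeffs)                        # shape: (4 runs, 3 coefficients)

# main-effect estimates (mean at +1 minus mean at -1) for each coefficient
for j, name in enumerate(["constant", "linear", "quadratic"]):
    effects = design.T @ coeffs[:, j] / 2.0
    print(name, "term - effects of A, B, C(=AB):", np.round(effects, 3))
```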

16.
Robust Design is an important method for improving product quality, manufacturability, and reliability at low cost. Taguchi's introduction of this method in 1980 to several major American industries resulted in significant quality improvement in product and manufacturing process design. While the robust design objective of making product performance insensitive to hard-to-control noise was recognized to be very important, many of the statistical methods proposed by Taguchi, such as the use of signal-to-noise ratios, orthogonal arrays, linear graphs, and accumulation analysis, have room for improvement. To popularize the use of robust design among engineers, it is essential to develop more effective, statistically efficient, and user-friendly techniques and tools. This paper first summarizes the statistical methods for planning and analyzing robust design experiments originally proposed by Taguchi; it then reviews newly developed statistical methods and identifies areas and problems where more research is needed. For planning experiments, we review a new experiment format, the combined array format, which can reduce the experiment size and allow greater flexibility for estimating effects that may be more important for physical reasons. We also discuss design strategies, alternative graphical tools and tables, and computer algorithms to help engineers plan more efficient experiments. For analyzing experiments, we review a new modeling approach, the response model approach, which yields additional information about how control factor settings dampen the effects of individual noise factors; this helps engineers better understand the physical mechanism of the product or process. We also discuss alternative variability measures for Taguchi's signal-to-noise ratios and develop methods for empirically determining the appropriate measure to use.
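The response-model idea can be illustrated with the small least-squares fit below: a model with a control x noise interaction is fitted to data from a crossed design, and the slope with respect to the noise factor shows how the control setting dampens the noise effect. The data and coefficients are synthetic assumptions, not results from the reviewed work.

```python
# Toy response-model illustration: y = b0 + b1*c + b2*n + b3*c*n; the noise
# slope b2 + b3*c shows how the control setting dampens the noise effect.
import numpy as np
from itertools import product

rng = np.random.default_rng(6)
design = np.array(list(product([-1.0, 1.0], repeat=2)) * 5)   # replicated 2x2 design
c, n = design[:, 0], design[:, 1]
y = 10 + 1.5 * c + 2.0 * n - 1.8 * c * n + rng.normal(0, 0.2, len(c))

X = np.column_stack([np.ones_like(c), c, n, c * n])
b = np.linalg.lstsq(X, y, rcond=None)[0]

for setting in (-1.0, 1.0):
    print(f"noise-factor slope at control={setting:+.0f}:", round(b[2] + b[3] * setting, 2))
# the control level at which b2 + b3*c is closest to zero is the most robust setting
```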

17.
It has been over ten years since the pioneering work on particle swarm optimization (PSO) by Kennedy and Eberhart. Since then, various modifications, well suited to particular application areas, have been reported widely in the literature. The evolutionary concept of PSO is clear-cut in nature, easy to implement in practice, and computationally efficient in comparison to other evolutionary algorithms. These merits are the primary motivation of this article's investigation of PSO applied to continuous optimization problems. The performance of conventional PSO, in terms of solution quality and convergence speed, deteriorates when the function to be optimized is multimodal or high-dimensional. Toward that end, it is of great practical value to develop a modified particle swarm optimizer suitable for solving high-dimensional, multimodal optimization problems. In the first part of the article, a design of experiments (DOE) is conducted comprehensively to examine the influence of each parameter in PSO. Based upon the DOE results, a modified PSO algorithm, termed Decreasing-Weight Particle Swarm Optimization (DW-PSO), is proposed. Two performance measures, the success rate and the number of function evaluations, are used to evaluate the proposed method. Computational comparisons with existing PSO algorithms show that DW-PSO exhibits a noticeable advantage, especially when it is applied to high-dimensional problems.
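A minimal particle swarm with a linearly decreasing inertia weight is sketched below to show the general mechanism behind a decreasing-weight variant; it is not necessarily DW-PSO's exact update rule. The test function, swarm size, bounds, and acceleration coefficients are common default assumptions.

```python
# Minimal PSO with a linearly decreasing inertia weight (sketch, not DW-PSO itself).
import numpy as np

rng = np.random.default_rng(7)

def sphere(x):                       # assumed test function (global minimum at 0)
    return np.sum(x ** 2, axis=1)

dim, n_particles, iters = 20, 30, 300
c1 = c2 = 2.0
w_start, w_end = 0.9, 0.4            # inertia weight decreases linearly over iterations

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), sphere(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for t in range(iters):
    w = w_start - (w_start - w_end) * t / (iters - 1)
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = sphere(pos)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best value found:", sphere(gbest[None, :])[0])
```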

18.
We present a new model for reliability analysis that employs condition monitoring data to simultaneously monitor the latent degradation level and track failure progress over time. The method presented in this paper is a bridge between Bayesian filtering and classical binary classification, both of which have been employed successfully in various application domains. The Kalman filter is used to model a discrete-time, continuous-state degradation process that is hidden and for which only indirect information is available through a multi-dimensional observation process. Logistic regression is then used to connect the latent degradation state with the failure process, which is itself a discrete-space stochastic process. We present a closed-form solution for the marginal log-likelihood function and provide formulas for a few important reliability measures. A dynamic, cost-effective maintenance policy is then introduced that can employ sensor signals for real-time decision-making. Finally, we demonstrate the accuracy and usefulness of our framework via numerical experiments.
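The two building blocks can be sketched as follows: a scalar Kalman filter tracks the latent degradation level from a two-dimensional condition-monitoring signal, and a logistic link converts the filtered level into a failure probability. All model parameters and the logistic coefficients are illustrative assumptions, not the paper's fitted values or its likelihood formulation.

```python
# Sketch: Kalman filter on a hidden scalar degradation level observed through
# two noisy sensors, followed by a logistic link to failure probability.
import numpy as np

rng = np.random.default_rng(8)

# state-space model: x_k = x_{k-1} + drift + w_k,  y_k = H x_k + v_k
drift, q = 0.05, 0.01          # degradation drift and process-noise variance
H = np.array([1.0, 0.5])       # two sensors observe the latent level
R = np.diag([0.1, 0.2])        # observation-noise covariance
beta0, beta1 = -6.0, 4.0       # assumed logistic-regression coefficients

# simulate one degradation history with observations
T = 60
x_true = np.cumsum(drift + rng.normal(0, np.sqrt(q), T))
obs = x_true[:, None] * H + rng.multivariate_normal([0, 0], R, T)

x_hat, P = 0.0, 1.0            # filter state mean and variance
for k in range(T):
    # predict
    x_pred, P_pred = x_hat + drift, P + q
    # update with the 2-D observation
    S = np.outer(H, H) * P_pred + R               # innovation covariance
    K = P_pred * H @ np.linalg.inv(S)             # Kalman gain (length-2 vector)
    x_hat = x_pred + K @ (obs[k] - H * x_pred)
    P = P_pred - K @ H * P_pred
    p_fail = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x_hat)))   # logistic failure link

print("final filtered degradation:", round(x_hat, 3),
      "failure probability:", round(p_fail, 3))
```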

19.
We propose an improved version of a previously developed optical arrangement for generating inhomogeneously polarized laser light modes with the aid of a diffractive optical element (DOE) with a carrier frequency. By eliminating lenses from the optical arrangement, we achieve miniaturization, reduced light losses, fewer parameters to be matched, and a simpler adjustment procedure. All the capabilities of the previous version, namely its universality and simple readjustment to different polarization types, are fully retained. Numerical modeling of the polarization mode combiner has made it possible to analyze its performance and capabilities. In the experiments, the quality of the resulting beams is shown to be improved. For generating higher-order cylindrical beams, a lower-order mode at the output of the polarization mode combiner is additionally transformed with a DOE that operates in the zero diffraction order and introduces radial phase changes.

20.
Quality and reliability design practitioners have utilized Taguchi's methodology of matrix experimentation to generate computer simulation data for characterizing performance variation of product designs. However, the sampling strategy employed renders computer implementation of matrix experimentation cumbersome and statistically invalid. Weaknesses of this approach also include sample size limitation and overestimation of performance variation. An alternative approach that combines Monte Carlo simulation with the strategies of independent sampling across runs and correlated sampling between runs is presented. An application case study shows that the proposed approach constitutes an improvement on the matrix approach with respect to statistical validity and estimation accuracy.
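The sampling idea can be pictured with the short sketch below: performance variation is characterized by Monte Carlo, with independent sampling across replications but common (correlated) random numbers reused across the design alternatives being compared. The performance function, nominal values, and tolerances are assumptions, not the case-study data.

```python
# Toy Monte Carlo with common random numbers across design alternatives.
import numpy as np

rng = np.random.default_rng(9)

def performance(x1, x2):
    return 10.0 / (x1 * x2)          # assumed performance characteristic

nominals = {"design A": (2.0, 1.0), "design B": (2.2, 0.9)}
sigma = np.array([0.05, 0.02])       # parameter standard deviations (tolerances)

n = 10_000
z = rng.standard_normal((n, 2))      # common random numbers reused for both designs
for name, (m1, m2) in nominals.items():
    samples = performance(m1 + sigma[0] * z[:, 0], m2 + sigma[1] * z[:, 1])
    print(name, "mean:", round(samples.mean(), 3), "std:", round(samples.std(ddof=1), 4))
```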
