Similar Documents
 Found 20 similar documents (search time: 31 ms)
1.
This paper develops a robust and distributed decision-making procedure for mathematically modeling and computationally supporting simultaneous decision-making by members of an engineering team. The procedure (1) treats variations in the design posed by other members of the design team as conceptual noise; (2) incorporates such noise factors into conceptually robust decision-making; (3) provides preference information to other team members on the variables they control; and (4) determines whether to execute the conceptually robust decision or to wait for further design certainty. While Chang et al. (1994) extended Taguchi's approach to such simultaneous decision-making, this paper uses a continuous formulation and discusses the foundations of the procedure in greater detail. The method is demonstrated by a simple distributed design process for a DC motor, and the results are compared with those obtained for the same problem using sequential decision strategies in Krishnan et al. (1991).
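As a rough illustration of steps (1)–(2), the sketch below (the loss function, variable names and ranges are hypothetical, not taken from the paper) treats a teammate's still-undecided variable as a noise factor and picks the locally controlled variable whose expected loss over that range is smallest:

```python
import numpy as np

# Hypothetical sketch: one designer controls a wire diameter d, while a
# teammate still controls a stack length L. Treating L as "conceptual noise",
# the designer picks the d whose loss is robust over the plausible range of L.

def loss(d, L):
    # Placeholder loss surface standing in for the DC-motor performance model.
    return (d - 0.8 * L) ** 2 + 0.1 * d

d_candidates = np.linspace(0.5, 2.0, 31)    # options the local designer controls
L_scenarios = np.linspace(1.0, 1.5, 11)     # teammate's still-undecided variable

expected_loss = [np.mean([loss(d, L) for L in L_scenarios]) for d in d_candidates]
worst_loss = [np.max([loss(d, L) for L in L_scenarios]) for d in d_candidates]

d_robust = d_candidates[int(np.argmin(expected_loss))]
print(f"robust choice of d: {d_robust:.2f}")
# If min(worst_loss) is much larger than min(expected_loss), waiting for the
# teammate's decision may be preferable to committing now (step 4 above).
```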

2.
Statistical process modeling is widely used in industry for forecasting production outcomes, for process control and for process optimization. Applying a prediction model in a production process allows the user to calibrate/predict the mean of the distribution of the process outcomes and to partition the overall variation in the distribution of the process outcomes into explained (by the model) and unexplained (residual) variation, thus reducing the unexplained variability. The additional information about the process behavior can be used prior to the sampling procedure and may help to reduce the sample size required to classify a lot. This research focuses on the development of a model-based sampling plan based on Cpk (process capability index). It is an extension of a multistage acceptance sampling plan also based on Cpk (Negrin et al., Quality Engineering 2009; 21:306–318; Quality and Reliability Engineering International 2011; 27:3–14). The advantage of this sampling plan is that the sample size needed depends directly and quantitatively on the quality of the process (Cpk), whereas other sampling plans such as MIL-STD-414 (Sampling Procedures and Tables for Inspection by Variables for Percent Defective, Department of Defense, Washington, DC, 1957) use only qualitative measures. The objective of this paper is to further refine the needed sample size by using a predictive model for the lot's expectation. We developed model-based sample size formulae which depend directly on the quality of the prediction model (as measured by R²) and adjust the 'not model-based' multistage sampling plan developed in Negrin et al. (Quality Engineering 2009; 21:306–318; Quality and Reliability Engineering International 2011; 27:3–14) accordingly. A simulation study was conducted to compare the model-based and the 'not model-based' sampling plans. It is found that when R² = 0, the model-based and 'not model-based' sampling plans require the same sample sizes in order to classify the lots. However, as R² becomes larger, the sample size required by the model-based sampling plan becomes smaller than the one required by the 'not model-based' sampling plan. In addition, it is found that the reduction of the sample size achieved by the model-based sampling plan becomes more significant as Cpk tends to 1 and can be achieved without increasing the proportion of classification errors. Finally, the suggested sampling plan was applied to a real data set from a chemicals manufacturing process for illustration. Copyright © 2011 John Wiley & Sons, Ltd.
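For orientation, here is a minimal sketch of the two ingredients the plan combines: the standard Cpk computation, and the idea that a prediction model with coefficient of determination R² leaves only residual variation to be covered by sampling. The (1 − R²) shrinkage factor below is an illustrative assumption, not the formula derived in the paper:

```python
import math
import numpy as np

def cpk(x, lsl, usl):
    """Standard process-capability index estimated from a sample."""
    mu, sigma = np.mean(x), np.std(x, ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

def model_based_sample_size(n_baseline, r_squared):
    # Illustrative assumption only: the prediction model explains a fraction
    # R^2 of the variance, so sampling needs to cover only the residual
    # variance and the 'not model-based' sample size shrinks accordingly.
    return max(2, math.ceil(n_baseline * (1 - r_squared)))

rng = np.random.default_rng(1)
lot = rng.normal(10.0, 0.5, size=200)            # simulated lot measurements
print("Cpk:", round(cpk(lot, lsl=8.0, usl=12.0), 2))
print("n needed with R^2 = 0.0:", model_based_sample_size(50, 0.0))
print("n needed with R^2 = 0.6:", model_based_sample_size(50, 0.6))
```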

3.
ABSTRACT

There is a recognized need to advance cumulative effects assessment to regional and ecologically meaningful scales, but such initiatives are often critiqued for being isolated from management contexts and the regulatory practices of project-based environmental assessment. A major challenge is that there has been limited attention devoted to understanding decision-making at the project level, and the value of monitoring data to support cumulative effects analysis. This article examines how cumulative effects are considered during environmental assessment decision-making within the context of freshwater management in the Mackenzie Valley, Northwest Territories. Interviews with representatives from organizations involved in environmental assessment, regulation, and monitoring are used to identify challenges to applying information about cumulative effects at the project scale. Results reinforce the need for regional approaches and improvements in information and monitoring capacities to support cumulative effects analysis, but also the need to address institutional and organizational deficiencies to ensure that the data and information generated are useful to and applied within project-based decision-making.

4.
5.
The purpose of this article is to test the performance of a heuristic algorithm that computes a quality control plan. The objective of the tests reported in this paper is twofold: (1) to compare the proposed heuristic algorithm (HA) to an optimal allocation (OA) method; and (2) to analyse the behaviour and limitations of the proposed HA in a scale-1 test with a before/after comparison. The method employed to evaluate this algorithm is based on comparisons: 1. The first test illustrates the method and its sensitivity to internal parameters. It is based on a simplified case study of a product from the semiconductor industry. The product is made up of 1000, 800 and 1200 wafers incorporating three different technologies. The production duration is 1 week, and three tools were involved in this test. The behaviour of the proposed algorithm is checked throughout the evolution of the model parameters: the risk exposure limit (RL) and the measurement capacity (P). The quality control plans for each tool and product are analysed and compared to those from a one-stage allocation process (named C0) that does not take risk exposure considerations into account. A comparison is also performed with OA.

2. The second scale-1 test is based on three scenarios covering several months of regular semiconductor production. Data were obtained from 23 etching and 12 photolithography tools. The outputs provided by the HA are used in the sampling scheduler implemented at this plant. The resulting samples are compared against three indicators.

The results of these comparisons show that, for small instances, OA is more relevant than the HA method. The HA provides realistic limits that are suitable for daily operations. Even though the HA may provide results that are far from optimal, it demonstrates a major MAR improvement. In terms of the maximum inhibit limit, the HA achieves better performance than C0, and the results are strongly correlated with RL and with the control capacity. The article concludes that the proposed algorithm can be used to plan controls and to guide their scheduling. It can also improve the insurance of the design for several levels of risk acceptance.

6.
The multivariate exponentially weighted moving average (MEWMA) control chart with five different estimators of the population covariance matrix has rarely been applied to monitor small fluctuations in statistical process control. In this article, mathematical models of the five estimators (S1, S2, S3, S4, S5) are established, from which the corresponding MEWMA control charts are obtained. The process monitoring performance of the five control charts is then simulated, and the simulation results show that the S4 estimator-based MEWMA control chart performs best in both the step-offset and the ramp-offset failure modes. Since inline process monitoring in photovoltaic manufacturing can be framed as a multivariate statistical process analysis problem, the feasibility and effectiveness of the proposed model are demonstrated in a case study of cell testing and sorting process control for the fabrication of multicrystalline silicon solar cells.
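For readers unfamiliar with the chart, here is a minimal sketch of the standard MEWMA recursion and plotted statistic; the sample covariance below is only a stand-in for the five estimators S1–S5 compared in the article:

```python
import numpy as np

def mewma_statistics(X, Sigma, lam=0.2):
    """Standard MEWMA: Z_i = lam*x_i + (1 - lam)*Z_{i-1}; plotted statistic
    T2_i = Z_i' Sigma_Z^{-1} Z_i with asymptotic Sigma_Z = lam/(2-lam) * Sigma."""
    p = X.shape[1]
    Sigma_Z_inv = np.linalg.inv(lam / (2 - lam) * Sigma)
    Z = np.zeros(p)
    t2 = []
    for x in X:
        Z = lam * x + (1 - lam) * Z
        t2.append(Z @ Sigma_Z_inv @ Z)
    return np.array(t2)

rng = np.random.default_rng(0)
in_control = rng.multivariate_normal(np.zeros(3), np.eye(3), size=100)
step_shift = np.r_[np.zeros((50, 3)), 0.4 * np.ones((50, 3))]   # small mean shift
X = in_control + step_shift
Sigma_hat = np.cov(in_control[:50], rowvar=False)   # stand-in for S1..S5
print(mewma_statistics(X, Sigma_hat)[-5:])          # statistic rises after the shift
```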

7.
In this article, the problem of choosing from a set of design alternatives based upon multiple, conflicting, and uncertain criteria is investigated. The problem of selection over multiple attributes becomes harder when risky alternatives exist. The overlap measure method developed in this article models two sources of uncertainty: imprecise or risky attribute values provided to the decision maker, and the decision maker's inability to specify an exact desirable attribute value. The effects of these uncertainties are mitigated using the overlap measure metric. A subroutine to this method, called the robust alternative selection method, ensures that the winning alternative is insensitive to changes in the relative importance of the different design attributes. The overlap measure method can be used to model and handle various sources of uncertainty and can be applied to any number of multiattribute decision-making methods. In this article, it is applied to the hypothetical equivalents and inequivalents method, which is a multiattribute selection method under certainty.
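One plausible way to quantify such an overlap, sketched here under the assumption that both the alternative's uncertain attribute and the decision maker's imprecise desirable value are modelled as probability densities (the article's exact metric may differ), is the shared area under the two densities:

```python
import numpy as np
from scipy import stats

def overlap_measure(dist_a, dist_b, grid):
    """Shared area under two densities: the integral of min(f_a, f_b)."""
    fa, fb = dist_a.pdf(grid), dist_b.pdf(grid)
    return np.trapz(np.minimum(fa, fb), grid)

# Uncertain attribute value of a design alternative vs. the decision maker's
# imprecise desirable value, both modelled here as normal densities (toy data).
alternative = stats.norm(loc=95.0, scale=4.0)    # e.g. predicted efficiency
target = stats.norm(loc=100.0, scale=3.0)        # imprecise preference

grid = np.linspace(80.0, 115.0, 2000)
print("overlap:", round(overlap_measure(alternative, target, grid), 3))
# A higher overlap suggests the alternative is more likely to satisfy the
# decision maker's preference despite both sources of uncertainty.
```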

8.
Quadratic response surface methodology often focuses on finding the levels of some (coded) predictor variables x = (x1, x2, …, xk) that optimize the expected value of a response variable y. Typically the experimenter starts from some best guess or "control" combination of the predictors (usually coded to x = 0) and performs an experiment varying them in a region around this center point. The question of interest addressed here is whether any x in the experimental region provides a long-run mean response E(y) preferable to that of the control, and if so, by what amount. This article approaches this question via simultaneous confidence intervals for δ(x) = E(y|x) − E(y|0) for all x within a specified distance of 0. A new method for two or more predictors is introduced that gives sharper intervals than the Scheffé method and also the Sa and Edwards adaptation of the Casella and Strawderman method. The new method does not require a rotatable design and allows for one-sided simultaneous bounds for δ(x). Approximate sample-size savings of the improved method over the Sa and Edwards adaptation of the Casella and Strawderman method ranged from 12% to 45% for two-sided intervals and 19% to 40% for one-sided intervals for designs with two or three predictors. Approximate sample-size savings of the improved method over the Scheffé method ranged from 14% to 47% for two-sided intervals and 22% to 62% for one-sided intervals.
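As a baseline for comparison, here is a sketch of a conservative Scheffé-type simultaneous interval for δ(x) after fitting a quadratic surface by least squares. The design, data and response surface are toy values, and the article's new, sharper intervals are not reproduced here:

```python
import numpy as np
from scipy import stats

def quad_features(x1, x2):
    # Full quadratic model in two coded predictors.
    return np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])

# Toy central-composite-style design and simulated responses.
rng = np.random.default_rng(2)
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [0, 0], [0, 0], [0, 0], [1.5, 0], [-1.5, 0], [0, 1.5], [0, -1.5]])
F = np.vstack([quad_features(*row) for row in X])
y = 10 + 2 * X[:, 0] - X[:, 1] - 1.5 * X[:, 0]**2 + rng.normal(0, 0.5, len(X))

beta, *_ = np.linalg.lstsq(F, y, rcond=None)
resid = y - F @ beta
n, p = F.shape
s2 = resid @ resid / (n - p)            # residual variance estimate
cov_beta = s2 * np.linalg.inv(F.T @ F)  # covariance of the coefficients

def scheffe_interval(x, alpha=0.05):
    # delta(x) = E(y|x) - E(y|0); conservative Scheffe-type simultaneous bound.
    c = quad_features(*x) - quad_features(0.0, 0.0)
    est = c @ beta
    half = np.sqrt(p * stats.f.ppf(1 - alpha, p, n - p) * (c @ cov_beta @ c))
    return est - half, est + half

print(scheffe_interval((0.5, -0.5)))
```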

9.
The optimization of a two-parameter river catchment simulation model is described. The parameter K1 controls the rate of infiltration into the soil, and the second parameter, K2, is used in the routing equation. The Simplex direct search method is used and is implemented on a hybrid computer. The computer programme forms an n-dimensional “simplex” (n being the number of parameters) from initial trial values of the parameters, and uses the basic operations of reflection, expansion and contraction to find the optimum value of the objective function. The operations are carried out according to the value of the objective function at each apex of the “simplex.” The peripheral devices linked to the hybrid computer are used to give a continuous display of the optimization process, and the effects of systematically varying the parameters are studied by plotting sample hydrographs for various pairs of parameter values and by plotting the surface of the objective function.
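The simplex direct search described is essentially the Nelder–Mead method; a modern equivalent of the optimization step, with a toy stand-in for the catchment model and observed hydrograph, might look like this:

```python
import numpy as np
from scipy.optimize import minimize

observed = np.array([0.2, 1.4, 3.1, 2.2, 1.0, 0.4])   # observed hydrograph (toy)

def simulate(k1, k2):
    # Toy stand-in for the catchment model: k1 scales infiltration losses,
    # k2 acts as a linear-reservoir routing coefficient.
    rain = np.array([1.0, 3.0, 5.0, 2.0, 0.5, 0.0])
    runoff = np.maximum(rain - k1, 0.0)
    q, out = 0.0, []
    for r in runoff:
        q = q + k2 * (r - q)
        out.append(q)
    return np.array(out)

def objective(params):
    k1, k2 = params
    return np.sum((simulate(k1, k2) - observed) ** 2)   # sum of squared errors

res = minimize(objective, x0=[0.5, 0.5], method="Nelder-Mead")
print(res.x, res.fun)   # optimized (K1, K2) and objective value
```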

10.
Abstract

The performance of reliability inference strongly depends on the modeling of the product's lifetime distribution. Many products have complex lifetime distributions whose optimal settings are not easily found. Practitioners often prefer a simpler lifetime distribution to facilitate the data modeling process, even when the true distribution is more complex. The effects of model mis-specification on the product's lifetime prediction are therefore an interesting research area. This article presents results on the behavior of the relative bias (RB) and relative variability (RV) of the pth quantile in an accelerated lifetime (ALT) experiment when the generalized Gamma (GG3) distribution is incorrectly specified as a Lognormal or Weibull distribution. Both complete and censored ALT models are analyzed. First, the analytical expression for the expected log-likelihood function of the misspecified model with respect to the true model is derived. The parameters of the incorrect model that best approximate the true model are then obtained directly via numerical optimization. The results demonstrate that the tail quantiles are significantly overestimated (underestimated) when data are wrongly fitted by the Lognormal (Weibull) distribution. Moreover, the variability of the tail quantiles is significantly enlarged when the model is incorrectly specified as Lognormal or Weibull. The effect on the tail quantiles is more pronounced when the sample size and censoring ratio are not large enough. Supplementary materials for this article are available online.
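A small simulation conveys the misspecification effect: draw lifetimes from a generalized gamma distribution, fit Lognormal and Weibull models by maximum likelihood, and compare a tail quantile. scipy's gengamma (with toy parameter values) is used as a stand-in for the GG3 parameterization in the article, and censoring is omitted for brevity:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
true_dist = stats.gengamma(a=2.0, c=0.7, scale=100.0)   # stand-in for GG3
sample = true_dist.rvs(size=200, random_state=rng)

# Fit the misspecified models by maximum likelihood (complete data).
ln_shape, ln_loc, ln_scale = stats.lognorm.fit(sample, floc=0)
wb_shape, wb_loc, wb_scale = stats.weibull_min.fit(sample, floc=0)

p = 0.01   # lower tail quantile typically of interest in reliability work
q_true = true_dist.ppf(p)
q_ln = stats.lognorm.ppf(p, ln_shape, ln_loc, ln_scale)
q_wb = stats.weibull_min.ppf(p, wb_shape, wb_loc, wb_scale)

for name, q in [("lognormal", q_ln), ("Weibull", q_wb)]:
    print(f"{name}: relative bias of q_{p} = {(q - q_true) / q_true:+.2%}")
```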

11.
ABSTRACT

Tubemakers of Australia Limited is a major Australian company that has developed a strategic planning process based on the principles of total quality management (TQM). The process is highly participative, with input from the board of directors to the shop floor. This approach generates a high level of commitment to the plan at all levels. The plan, however, is not rigid, and each business unit uses its plan as its day-to-day operations manual, reviewing and updating it as often as necessary.

This planning model, as developed by this company, is effective in integrating the planning process into the company's operations. The company is now pursuing a more formalized, coordinated TQM approach, and, in conjunction with this, the planning process is being modified to more closely match the Japanese Policy Deployment (Hoshin) model. The development of an effective strategic planning process in conjunction with an effective TQM operating style should serve as a model for other organizations.

12.
Abstract

The purpose of this paper is to examine how Regulatory Impact Assessment (RIA) can contribute to decision-making processes for Official Development Assistance (ODA) loans and grants. The point of departure for the discussion is the phenomenon that RIA, within a context of ODA, is applied by International Finance Institutions mainly in the context of Development Policy Loans, to introduce or strengthen country systems for Regulatory Impact Assessment. However, ODA grants and loans, particularly when specific policy or regulatory conditions are attached to them, significantly impact economic and social conditions within the beneficiary country. This article examines what role RIA can play in facilitating a coherent decision-making process affecting the ODA allocation within a context of conditionalities requiring the introduction of new, or changes to existing, policies and regulations. The discussion considers the nexus between development aid effectiveness, conditionality and ownership, and RIA. The article argues that there is a justification for applying RIA to ODA loans and grants which carry regulatory and policy conditionalities.

13.
This paper presents an improvement on earlier work on a common weight multi-criteria decision-making (MCDM) approach for technology selection by Karsak, E.E. and Ahiska, S.S. (Practical common weight multi-criteria decision-making approach with an improved discriminating power for technology selection. Int. J. Prod. Res., 2005, 43, 1537–1554), benefiting from a bisection search algorithm. The proposed algorithm enables the values of the discriminating parameter, k, which appears in the introduced efficiency measure, to be calculated in a systematic and robust manner rather than requiring the decision analyst to assign an arbitrary step-size value. In addition, the paper presents comments on the model proposed by Amin, G.R., Toloo, M. and Sohrabis, B. (An improved MCDM DEA model for technology selection. Int. J. Prod. Res., 2006, 44, 2681–2686) for technology selection. Finally, the robustness of the proposed decision-making framework is illustrated via several numerical examples taken from the above-mentioned papers.
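A generic sketch of how a bisection search can replace an arbitrary step size when tuning a discriminating parameter k, assuming the discrimination criterion is monotone in k; the toy criterion below is hypothetical and does not reproduce the Karsak–Ahiska efficiency measure:

```python
def bisection_for_k(discriminates, k_low, k_high, tol=1e-4):
    """Find (within tol) the smallest k at which `discriminates(k)` becomes True,
    assuming the property is monotone in k. Stand-in for tuning the
    discriminating parameter of a common-weight MCDM/DEA efficiency measure."""
    assert not discriminates(k_low) and discriminates(k_high)
    while k_high - k_low > tol:
        mid = 0.5 * (k_low + k_high)
        if discriminates(mid):
            k_high = mid
        else:
            k_low = mid
    return k_high

# Hypothetical criterion: k is large enough once exactly one alternative
# attains the maximal efficiency score (full discrimination).
def toy_criterion(k):
    scores = [1.0 / (1.0 + k * gap) for gap in (0.00, 0.01, 0.05, 0.20)]
    return sum(abs(s - max(scores)) < 1e-9 for s in scores) == 1

print(round(bisection_for_k(toy_criterion, 0.0, 10.0), 4))
```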

14.
ABSTRACT

Understanding the variability of trends and other continuously distributed quantities is a vital ability underlying many safety-critical decisions, such as how widely to search for a downed aircraft, or whether to prepare for evacuation in the face of an uncertain hurricane or hurricane track. We first review the sparse research on this topic, which indicates a general systematic tendency to underestimate such variability, akin to overconfidence in the precision of prediction. However, the magnitude of such underestimation varies across experiments and research paradigms. Based on these existing findings, and on other known biases and vulnerabilities in the perception and cognition of multiple instances, we define the core elements of a computational model that can itself predict three measures of performance in variability estimation: bias (to over- or underestimate variability), sensitivity (to variability differences) and precision (of variability judgements). Factors that influence these measures, and their approximate weightings, are then identified: attention, the number of instances across which variability is estimated, the time delay affecting the memory system employed, familiarity of the material, the anchoring heuristic and the method of judgement. These are then incorporated into the foundations of a linear additive model.
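Purely as an illustration of the intended model form (the weights, coding and function names below are hypothetical, not the authors' calibrated values), a linear additive model over the listed factors might look like:

```python
import math

# Hypothetical linear additive model of variability-estimation bias.
# Factor values are assumed to be coded on a 0-1 scale; weights are illustrative.
def predicted_bias(attention, n_instances, delay_s, familiarity,
                   anchoring, numeric_judgement):
    # Negative values indicate underestimation of variability
    # (the typical finding reviewed in the abstract).
    return (-0.30
            + 0.10 * attention
            + 0.05 * math.log(n_instances)
            - 0.02 * math.log1p(delay_s)
            + 0.05 * familiarity
            - 0.10 * anchoring
            + 0.05 * numeric_judgement)

print(predicted_bias(attention=0.8, n_instances=12, delay_s=30,
                     familiarity=0.5, anchoring=0.7, numeric_judgement=1.0))
```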

15.
Rajagopalan and Irani (Some comments on Malakooti et al. 'Integrated group technology, cell formation, process planning, and production planning with application to the emergency room'. Int. J. Prod. Res., 2006, 44, 2265–2276) provide a critique of the Malakooti et al. (Integrated group technology, cell formation, process and production planning with application to the emergency room. Int. J. Prod. Res., 2004, 42, 1769–1786) integrated cell/process/capacity formation (ICPCF) approach and suggest an improved method for solving the ICPCF problem. Rajagopalan and Irani (2006) attempt to solve the emergency room layout problem presented in Malakooti et al. (2004) and claim to have obtained an improved solution from their approach (hybrid flowshop layout). Although there are certain advantages to considering Rajagopalan and Irani's (2006) approach, we believe that their approach for solving ICPCF problems has significant shortcomings.

16.
Many peacetime spare parts demand forecasting models have been proposed recently. However, it is difficult to forecast spare parts consumption in wartime, owing to the complexity and randomness of battle damage. To serve this purpose, we choose a combined army element as the study object and propose a novel method to forecast battle damage-oriented spare parts demand based on analysis of wartime influencing factors and ε-Support Vector Regression (ε-SVR). First, we extract the key influencing factors of equipment damage, including the battlefield environment and the fighting capacities of the opposed forces, by qualitative analysis, and quantify those factors by combining the Delphi technique with the fuzzy comprehensive evaluation method. Subsequently, we construct the sample space by using the influencing factors of battle damage as the input variables and the corresponding spare parts demand as the output variable, introduce the ε-insensitive loss function and establish the ε-SVR prediction model of 'wartime influencing factors – battle damage-oriented spare parts demand'. Finally, we present a case study forecasting three representative kinds of spare parts for an assault by a combined army element, which verifies the feasibility and effectiveness of the model. We find that the proposed method can provide decision-making references for wartime spare parts supply with higher accuracy and more advantages than other current methods.
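A minimal sketch of the ε-SVR step with scikit-learn, mapping quantified influencing factors to spare-parts demand; the feature set and data below are hypothetical placeholders, not the paper's case-study values:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Rows: historical/simulated engagements. Columns (hypothetical, assumed already
# quantified via Delphi + fuzzy comprehensive evaluation): terrain severity,
# enemy fire intensity, own-force exposure, engagement duration.
X = np.array([[0.3, 0.6, 0.4, 0.5],
              [0.7, 0.8, 0.6, 0.9],
              [0.2, 0.3, 0.3, 0.4],
              [0.9, 0.9, 0.8, 1.0],
              [0.5, 0.5, 0.5, 0.6]])
y = np.array([12, 35, 8, 52, 20])      # spare parts demanded (units)

# epsilon sets the width of the insensitive loss zone around the regression.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=2.0))
model.fit(X, y)
print(model.predict([[0.6, 0.7, 0.5, 0.8]]))   # forecast for a new scenario
```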

17.
This paper introduces a compact form for the maximum value of the non-Archimedean epsilon in Data Envelopment Analysis (DEA) models applied to technology selection, without the need to solve a linear program (LP). Using this method, the computational performance of the common weight multi-criteria decision-making (MCDM) DEA model proposed by Karsak and Ahiska (International Journal of Production Research, 2005, 43(8), 1537–1554) is improved. This improvement is significant when computational issues and complexity analysis are a concern.

18.
System design is a complex task when design parameters have to satisfy a number of specifications and objectives, which often conflict with one another. This challenging problem is called multi-objective optimization (MOO). The most common approximation consists of optimizing a single cost index formed as a weighted sum of the objectives. However, once the weights are chosen the solution does not guarantee the best compromise among specifications, because there is an infinite number of possible solutions. A new approach can be stated, based on the designer's experience regarding the required specifications and the associated problems. This valuable information can be translated into preferences for design objectives, and will lead the search process to the best solution in terms of these preferences. This article presents a new method, which enumerates these a priori objective preferences. As a result, a single objective is built automatically and no weight selection needs to be performed. Problems occurring because of the multimodal nature of the generated single cost index are managed with genetic algorithms (GAs).
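To contrast the two formulations discussed above, here is a sketch of a weighted-sum scalarization next to a preference-driven single objective; the threshold-and-priority encoding is only an illustrative stand-in for the article's construction, and a generic evolutionary optimizer substitutes for the GA:

```python
from scipy.optimize import differential_evolution

# Two conflicting toy objectives of a design parameter vector x.
def f1(x):   # e.g. settling time
    return (x[0] - 1.0) ** 2 + 0.5 * x[1] ** 2

def f2(x):   # e.g. control effort
    return x[0] ** 2 + (x[1] - 2.0) ** 2

# (a) Classical weighted sum: the result depends entirely on the chosen weight.
def weighted_sum(x, w=0.5):
    return w * f1(x) + (1 - w) * f2(x)

# (b) Preference-driven single objective (illustrative): the designer states a
# desired threshold per objective; violating a higher-priority objective is
# penalised far more heavily, so no weight tuning is required.
def preference_index(x, thresholds=(0.5, 1.5), priorities=(1e3, 1.0)):
    return sum(p * max(0.0, f(x) - t) ** 2
               for f, t, p in zip((f1, f2), thresholds, priorities))

bounds = [(-3.0, 3.0), (-3.0, 3.0)]
# Evolutionary search copes with a possibly multimodal aggregated index
# (standing in here for the genetic algorithm used in the article).
res_w = differential_evolution(weighted_sum, bounds, seed=0)
res_p = differential_evolution(preference_index, bounds, seed=0)
print("weighted sum:", res_w.x, f1(res_w.x), f2(res_w.x))
print("preferences :", res_p.x, f1(res_p.x), f2(res_p.x))
```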

19.
Abstract:

This article extends the new product development (NPD) literature by presenting a case study of a lean product development (LPD) transformation framework implemented at a U.S.-based manufacturing firm. In a departure from typical LPD methods, in this article the design structure matrix and the cause-and-effect matrix are integrated into the lean transformation framework, allowing analysis of the underlying complexity of a product development (PD) system and thus facilitating determination of the root causes of wasteful rework. Several strategies to transform the current PD process into a lean process are discussed. Besides the two-phase improvement plan, a new organizational structure roadmap and a human resources plan are also suggested to support the recommended changes in the NPD process. The results of the first phase show a 32% reduction in PD cycle time due to the proposed NPD process. The article concludes with lessons learned and implications for engineering managers based on the case study.

20.
The momentum distribution n(p) of atoms in condensed matter may be determined from neutron inelastic scattering measurements. However, scattering at high momentum transfers Q (high enough that the impulse approximation holds) is required. A model of solid helium is used to assess the error introduced in n(p) by assuming that the impulse approximation holds at several Q values. At Q = 20 Å⁻¹ the error in n(p) is 5–10%. This error can be reduced by a factor of ten using a symmetrization procedure.
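For reference, a sketch of the impulse-approximation relations that connect the measured scattering to n(p), written under the usual assumptions of ħ = 1, an isotropic momentum distribution normalized to unity, and the standard y-scaling convention (conventions vary between papers):

```latex
% Impulse approximation and Compton profile (hbar = 1, isotropic n(p)):
S_{\mathrm{IA}}(Q,\omega)
  = \int d^{3}p \; n(\mathbf{p})\,
    \delta\!\left(\omega - \frac{Q^{2}}{2M} - \frac{\mathbf{p}\cdot\mathbf{Q}}{M}\right),
\qquad
y = \frac{M}{Q}\left(\omega - \frac{Q^{2}}{2M}\right),

J(y) = \frac{Q}{M}\, S_{\mathrm{IA}}(Q,\omega)
     = 2\pi \int_{|y|}^{\infty} p\, n(p)\, dp .
```

Deviations of the measured scattering from these relations at finite Q are precisely the errors the abstract quantifies for the solid-helium model.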
