Similar Articles
20 similar articles found
1.
The process capability index Cpk has been widely used as a process performance measure. In practice, this index is estimated from sample data, so it is of great interest to obtain confidence limits for the actual index given a sample estimate. In this paper we depict graphically the relationship between the process potential index (Cp), the process shift index (k) and the percentage non-conforming (p). Based on the monotone properties of this relationship, we derive two-sided confidence limits for k and Cpk under two different scenarios. These two limits are combined using the Bonferroni inequality to generate a third type of confidence limit. The performance of these limits for Cpk, in terms of coverage probability and average width, is evaluated by simulation. The most suitable type of confidence limit for each specific range of k is then determined. The usage of these confidence limits is illustrated via examples. Finally, a performance comparison is made between the proposed confidence limits and three non-parametric bootstrap confidence limits. The results show that the proposed method consistently gives the smallest width while still providing the intended coverage probability. © 1997 John Wiley & Sons, Ltd.
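As a minimal sketch of the textbook definitions these confidence limits build on (the function name and data below are illustrative, not the paper's), the three indices can be estimated from sample data as follows:

```python
import statistics

def capability_indices(data, lsl, usl):
    """Point estimates of Cp, the shift index k, and Cpk.

    These are the standard textbook definitions; the paper's contribution
    is confidence limits for these quantities, not the point estimates.
    """
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)   # sample standard deviation
    m = (usl + lsl) / 2              # specification midpoint
    d = (usl - lsl) / 2              # specification half-width
    cp = (usl - lsl) / (6 * sigma)   # process potential index
    k = abs(mu - m) / d              # process shift index
    cpk = cp * (1 - k)               # process capability index
    return cp, k, cpk
```

With a centred process (k = 0), Cpk reduces to Cp; as the process mean shifts toward a specification limit, k grows and Cpk shrinks.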

2.
Process capability indices (PCIs) have been widely used in the manufacturing industry, providing numerical measures of process precision, accuracy and performance. Capability indices for processes with a single quality characteristic have been investigated extensively. However, an industrial product may have more than one quality characteristic, so multivariate PCIs are needed to establish performance measures for evaluating the capability of a multivariate manufacturing process. In this paper, we analyze the relationship between PCIs and process yield. The index ECpk is proposed based on the six sigma strategy, and there is a one-to-one relationship between the ECpk index and process yield. Following the same idea, we propose the index MECpk to measure processes with multiple characteristics. The MECpk index can evaluate the overall process yield of both one-sided and two-sided processes. We also analyze the effect of the covariance matrix on overall process yield and suggest a way to improve it. Copyright © 2013 John Wiley & Sons, Ltd.
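The one-to-one link between a capability index and yield rests on the normal model. As a hedged sketch (these are standard results, not the paper's ECpk construction), the exact yield of a two-sided normal process and the classical lower bound implied by Cpk can be computed from the standard normal CDF:

```python
import math

def phi(x):
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_yield(mu, sigma, lsl, usl):
    """Exact yield (fraction conforming) of a normal process
    with two-sided specification limits."""
    return phi((usl - mu) / sigma) - phi((lsl - mu) / sigma)

def yield_lower_bound(cpk):
    """Classical lower bound on yield implied by Cpk: 2*Phi(3*Cpk) - 1."""
    return 2.0 * phi(3.0 * cpk) - 1.0
```

For a centred process the bound is tight: Cpk = 1 corresponds to the familiar 99.73% yield of a ±3-sigma process.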

3.
To reduce the pollution caused by the carbon emissions of automobiles and locomotives, many countries have in recent years encouraged citizens to use bicycles for short-distance trips. Bicycles are composed of many parts, and quick-release hubs are critical fixtures on the front and rear wheel axles. The quick-release hub is a product with multiple quality characteristics: two larger-the-better characteristics (quick-release stroke and tensile strength) and three nominal-the-best characteristics (axis size, assembly distance, and the outer diameter of the spindle). To improve the quality of quick-release hubs, this study proposes a multi-quality characteristic analysis table (MQCAT) and a multi-quality characteristic analysis model (MQCAM). The proposed method can serve as a valuable reference for guiding improvement efforts at quick-release hub manufacturers. The case of a quick-release hub manufacturer in central Taiwan is presented to illustrate the feasibility of the proposed method, and a comparison with recent methods demonstrates its advantages. Finally, conclusions are drawn from the study's findings.

4.
Determining core BPR processes based on the six sigma project selection principle
Based on an analysis of the relationship between an enterprise's core competence and business process reengineering (BPR), a core-competence-based BPR model is proposed. Within this model, the six sigma project selection principle is applied to build a correlation matrix between the dispersed elements of core competence and the individual business-process steps; the process steps affecting core competence are comprehensively evaluated and prioritized, and the enterprise then selects the core processes to be reengineered according to its actual conditions. The method is validated with a real enterprise case.
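A correlation matrix of this kind can be sketched as a simple weighted scoring routine (illustrative only; the weights, rating scale and function name are assumptions, not taken from the paper):

```python
def rank_processes(weights, relation):
    """Score each business-process step by its weighted relation to the
    core-competence elements, as in a six-sigma-style selection matrix.

    weights  : importance weight of each competence element
    relation : relation[i][j] = strength of the link between element i
               and process step j (e.g. a 0/1/3/9 rating scale)
    Returns (step indices sorted by descending score, the scores).
    """
    n_steps = len(relation[0])
    scores = [sum(w * row[j] for w, row in zip(weights, relation))
              for j in range(n_steps)]
    order = sorted(range(n_steps), key=lambda j: -scores[j])
    return order, scores
```

The top-ranked steps are the candidates for reengineering, subject to the enterprise's actual conditions.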

5.
In recent years, the major driving forces of customer preference, vehicle safety, environmental protection, and flat demand for new cars have led to a proliferation of car models and associated price reductions in the auto supply chain. To react to this complex and competitive environment, auto suppliers urgently need to conduct continual improvement (CI) in an effective and systematic way. This paper therefore presents a framework for CI: the Integrated Development System, which integrates capability maturity model integration (CMMI) and the six sigma approach. By implementing the framework, an automaker can establish a solid process-based management system, identify critical processes, and optimize them. A case study demonstrates the application of this framework; all expected process performance targets were achieved, a dramatic improvement of 70% over past records. The proposed framework thus serves as a useful reference for improving an organization's process maturity. Copyright © 2008 John Wiley & Sons, Ltd.

6.
In References 1 and 2 we showed that the error in the finite-element solution has two parts, the local error and the pollution error, and we studied the effect of the pollution error on the quality of the local error-indicators and the quality of the derivatives recovered by local post-processing. Here we show that it is possible to construct a posteriori estimates of the pollution error in any patch of elements by employing the local error-indicators over the mesh outside the patch. We also give an algorithm for the adaptive control of the pollution error in any patch of elements of interest.

7.
Lean Six Sigma (LSS), a powerful tool for continuous improvement and innovation, has been widely applied across industry (manufacturing, construction, services, and even non-profit organizations), yet some differences remain in how academia and industry understand LSS. To build a more comprehensive picture, this paper analyzes 2701 publications in the Web of Science database whose titles contain "Six Sigma" or "Lean Six Sigma", briefly reviews the origin of LSS, systematically surveys the current state of LSS in both theoretical research and industrial application, and identifies six main research topics. Drawing on LSS practice in China, future development trends are then discussed from the strategic, systems, and integration perspectives, with the aim of providing guidance and reference for both academic research on and industrial application of Lean Six Sigma.

8.
The present work deals with an a posteriori error estimator for linear finite element analysis, based on a stress recovery procedure called Recovery by Compatibility in Patches. The key idea of this procedure is to recover improved stresses by minimizing the complementary energy over patches of elements, with the displacements computed by the finite element analysis prescribed on the boundary of each patch. Here, a new form of this recovery procedure is presented. Adopting a different patch configuration, centred on an element instead of a node, drastically simplifies the recovery process, improving efficiency and making implementation in finite element codes much easier. Robustness tests demonstrate that the error estimator associated with the new form of the recovery procedure retains the very good properties of the original one, such as superconvergence. Numerical results on two common benchmark problems confirm the effectiveness of the proposed error estimator, which is competitive with those currently available. Copyright © 2006 John Wiley & Sons, Ltd.

9.
In part I of this investigation, we proved that the standard a posteriori estimates, based only on local computations, may severely underestimate the exact error for the classes of wave-numbers and the types of meshes employed in engineering analyses. We showed that this is due to the fact that the local estimators do not measure the pollution effect inherent to the FE-solutions of Helmholtz' equation with large wavenumber. Here, we construct a posteriori estimates of the pollution error. We demonstrate that these estimates are reliable and can be used to correct the standard a posteriori error estimates in any patch of elements of interest. © 1997 John Wiley & Sons, Ltd.

10.
Given a scalar, stationary, Markov process, this short communication presents a closed-form solution for the first-passage problem for a fixed threshold b. The derivation is based on binary processes and the general formula of Siegert [Siegert AJF. On the first-passage time probability problem. Physical Review 1951; 81:617–23]. The relation for the probability density function of the first-passage time is identical to the commonly used formula that was derived by VanMarcke [VanMarcke E. On the distribution of the first-passage time for normal stationary random processes. Journal of Applied Mechanics ASME 1975; 42:215–20] for Gaussian processes. The present derivation is based on more general conditions and reveals the criteria for the validity of the approximation. Properties of binary processes are also used to derive a hierarchy of upper bounds for any scalar process.
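For context, the Gaussian special case mentioned above rests on two classical ingredients (standard results, not the communication's binary-process derivation): Rice's mean upcrossing rate and the Poisson first-passage approximation.

```latex
% Mean rate of upcrossings of level b for a zero-mean stationary
% Gaussian process X(t) (Rice's formula):
\nu_b^{+} \;=\; \frac{1}{2\pi}\,\frac{\sigma_{\dot X}}{\sigma_X}\,
\exp\!\left(-\frac{b^{2}}{2\sigma_X^{2}}\right),
% and the Poisson (independent-crossings) first-passage estimate:
\qquad
P(T_b > t) \;\approx\; \exp\!\left(-\nu_b^{+}\, t\right).
```

VanMarcke's formula refines this estimate by accounting for the clumping of crossings, which the Poisson assumption ignores.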

11.
Diffuse optical tomography (DOT) is a non-invasive imaging technique that involves a typically large-scale, ill-posed inverse problem with low spatial resolution. The inverse problem in DOT is computationally intensive, and reducing its computational complexity while making it well posed is one of the most challenging research areas. One well-known complexity-reduction technique is to account for the modelling error that originates from the discretization of the forward problem. Incorporating this discretization error in Bayesian inference has already been discussed, through a method in which the likelihood is modified by an off-line prior density estimate. This paper implements a new method that enhances the modelling-error approach with an iterative scheme, very similar to the ensemble Kalman filter, for updating the statistical parameters of the modelling discrepancy in DOT. Moreover, the reconstruction is conducted with a small sample size rather than off-line, so the computational complexity is decreased and the algorithm converges in a few iterations. The efficiency of the proposed method is illustrated by simulations.

12.
Numerical model reduction is adopted for solving the microscale problem that arises from computational homogenization of a model problem of porous media with displacement and pressure as unknown fields. A reduced basis is obtained for the pressure field using (i) spectral decomposition (SD) and (ii) proper orthogonal decomposition (POD). This strategy has been used in previous work; the main contribution of this article is the extension with an a posteriori estimator for assessing the error in (i) the energy norm and (ii) a given quantity of interest. The error estimator builds on previous work by the authors; the novelty presented here is the generalization of the estimator to a coupled problem and, more importantly, its accommodation of a POD basis rather than the SD basis. Guaranteed, fully computable and low-cost bounds are derived, and the performance of the error estimates is demonstrated via numerical results.
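A POD basis of the kind used here is conventionally extracted from solution snapshots via the singular value decomposition; a generic sketch (not the authors' implementation; names and shapes are illustrative):

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """Build a POD reduced basis from a snapshot matrix whose columns
    are solution samples.

    The left singular vectors, ordered by decreasing singular value,
    give the optimal rank-n linear basis in the least-squares sense.
    Returns (basis of shape (n_dofs, n_modes), singular values).
    """
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return u[:, :n_modes], s
```

The decay of the singular values indicates how many modes are needed; a rapidly decaying spectrum justifies a small reduced basis.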

13.
The number of studies on control charts proposed to monitor profiles, where the quality of a process or product is expressed as a function of a response and explanatory variable(s), has been increasing in recent years. However, most authors assume that the in-control parameter values are known in phase II analysis and that the error terms are normally distributed; these assumptions are rarely satisfied in practice. In this study, the performance of the EWMA-R, EWMA-3, and EWMA-3(d2) methods for monitoring simple linear profiles is examined via simulation where the in-control parameters are estimated and the innovations follow a Student's t or gamma distribution. Instead of the average run length (ARL) and the standard deviation of the run length, we use the average and standard deviation of the ARL as performance measures in order to capture the sampling variation among different practitioners. The estimation effect becomes more severe as the number of phase I profiles used in estimation decreases, as expected, and as the distribution deviates further from normality. Moreover, although the average ARL values approach the desired values as the amount of phase I data increases, their standard deviations remain far from the acceptable level, indicating high practitioner-to-practitioner variability.
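All three charts share the EWMA recursion as their common core; a minimal sketch of that recursion (the smoothing constant and start value are illustrative, and the profile-specific statistics of EWMA-R/EWMA-3 are not reproduced here):

```python
def ewma(x, lam, z0):
    """EWMA statistic z_t = lam * x_t + (1 - lam) * z_{t-1}.

    lam is the smoothing constant in (0, 1]; z0 is the starting value,
    usually the in-control target of the monitored statistic.
    """
    z = []
    prev = z0
    for xt in x:
        prev = lam * xt + (1 - lam) * prev
        z.append(prev)
    return z
```

In a chart, each z_t is compared against control limits of the form mu0 ± L * sigma * sqrt(lam / (2 - lam)) in steady state; a point outside signals a shift.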

14.
Process capability indices have been widely used in the manufacturing industry. While most studies consider estimation of capability indices for normal processes, comparatively little is known about their behavior in non-normal settings. Greenwich and Jahr-Schaffrath (Int. J. Qual. Reliab. Manage. 1995; 12:58–71) introduced the incapability index Cpp to evaluate processes. In this paper, we explore interval estimation of the incapability index Cpp for non-normally distributed processes using seven feasible methods, and develop an efficient evaluation criterion, relative coverage, to compare their performance. Detailed simulation results for six non-normally distributed processes are presented. The results show that the bootstrap pivotal method described by Wasserman (All of Statistics: A Concise Course in Statistical Inference. Springer Science+Business Media, 2004) is the best feasible method for estimating Cpp. An example also demonstrates how the method may be used in practice. Copyright © 2008 John Wiley & Sons, Ltd.
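As a hedged sketch of the two ingredients (one common form of the Cpp index and the basic/pivotal bootstrap interval; the data, target and function names are illustrative, not the paper's simulation design):

```python
import random
import statistics

def cpp(data, target, d):
    """Incapability index Cpp = ((mu - T)/D)**2 + (sigma/D)**2
    with D = d/3 and d the specification half-width (one common form)."""
    mu = statistics.mean(data)
    sigma = statistics.pstdev(data)
    big_d = d / 3.0
    return ((mu - target) / big_d) ** 2 + (sigma / big_d) ** 2

def pivotal_ci(data, target, d, b=2000, alpha=0.05, seed=1):
    """Bootstrap pivotal (basic) interval for Cpp:
    (2*theta_hat - q_{1-a/2}, 2*theta_hat - q_{a/2}),
    where q are quantiles of the bootstrap replicates."""
    rng = random.Random(seed)
    theta = cpp(data, target, d)
    reps = sorted(
        cpp([rng.choice(data) for _ in data], target, d) for _ in range(b)
    )
    lo = 2 * theta - reps[int((1 - alpha / 2) * b) - 1]
    hi = 2 * theta - reps[int((alpha / 2) * b)]
    return lo, hi
```

Smaller Cpp is better (it measures incapability), so a narrow interval near zero indicates a capable, on-target process.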

15.
A posteriori error estimation in the constitutive law has already been extensively developed and applied to finite element solutions of structural analysis problems. This paper presents an extension of this estimator, which we have already partially reported, to problems governed by the Helmholtz equation (e.g. acoustic problems), and contains information about the construction of the admissible fields for acoustics. Moreover, it has been proven that the upper bound property this estimator enjoys for elasticity problems (the error in the constitutive law bounds the exact error in the energy norm from above) does not generally carry over to acoustic formulations, owing to the presence of the specific pollution error. The numerical investigations of the present paper confirm that the upper bound property of this type of estimator holds only for low (non-dimensional) wave numbers and is violated for high wave numbers due to the pollution effect. Copyright © 1999 John Wiley & Sons, Ltd.

16.
Li Yinghui (李映辉). 工业计量 (Industrial Measurement), 2004, 14(1): 45-46
Taking as an example the verification of a Model 721 spectrophotometer manufactured by Shanghai No. 3 Analytical Instrument Factory using an interference filter (one of the main standard devices for visible spectrophotometers), the paper evaluates the measurement uncertainty of the wavelength indication error.
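Such an evaluation typically combines independent standard-uncertainty components in quadrature and reports a k = 2 expanded uncertainty (roughly 95% coverage); a generic GUM-style sketch (the component values are illustrative, not the paper's uncertainty budget):

```python
import math

def combined_uncertainty(components):
    """Combined standard uncertainty u_c = sqrt(sum of u_i^2) for
    independent components, plus the k = 2 expanded uncertainty,
    as in a GUM-style evaluation of a wavelength indication error."""
    u_c = math.sqrt(sum(u * u for u in components))
    return u_c, 2.0 * u_c
```

Each component u_i would come from a Type A (statistical) or Type B (e.g. certificate, resolution) evaluation before being combined.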

17.
18.
Virtually all manufacturing processes are subject to variability, an inherent characteristic of most production processes: no two parts can ever be exactly the same in their dimensions. For machining processes such as drilling, milling, and turning, overall variability is caused in part by machine tools, tooling, fixtures and workpiece material. Since variability, which can accumulate through tolerance stacking, can result in defective parts, the number of parts produced in a batch is limited. When a batch contains too many parts, the likelihood of producing an all-acceptable batch decreases as tolerances accumulate; on the other hand, too small a batch incurs higher manufacturing costs due to frequent setups and tool replacements, even though the likelihood of acceptable parts increases. To address this trade-off, we present a stochastic model for determining the optimal batch size that considers part-to-part variation in terms of tool wear, which tends to grow with batch size. A mathematical model is constructed on the assumption that the process used to produce preceding parts affects the state of subsequent parts in a probabilistic manner.
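A toy version of this batch-size trade-off can be sketched as follows (the linear wear-to-defect model, the costs and the function names are illustrative assumptions, not the paper's stochastic model):

```python
def expected_cost_per_good_part(n, setup_cost, unit_cost, p0, wear):
    """Expected cost per acceptable part for a batch of size n, assuming
    part i is defective with probability min(1, p0 + wear * i), i.e.
    the defect rate grows linearly with tool wear over the batch.
    Requires p0 < 1 so the first part has a positive acceptance chance."""
    expected_good = sum(1.0 - min(1.0, p0 + wear * i) for i in range(n))
    total_cost = setup_cost + n * unit_cost
    return total_cost / expected_good

def best_batch_size(setup_cost, unit_cost, p0, wear, n_max=200):
    """Batch size minimizing expected cost per acceptable part."""
    return min(range(1, n_max + 1),
               key=lambda n: expected_cost_per_good_part(
                   n, setup_cost, unit_cost, p0, wear))
```

With no wear the setup cost alone pushes toward large batches; with wear the falling acceptance probability pulls the optimum back to an interior batch size.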

19.
During the last decade, significant scientific efforts were made in the area of quality assurance of numerical results obtained by means of the finite element method (FEM). These efforts were based on adaptive remeshing controlled by an estimated error. This paper reports on the extension of error estimation to non-linear shell analysis involving strain-hardening and softening plasticity. In the context of incremental-iterative analyses, an incremental error estimator based on the rate of work is introduced. The stress recovery technique proposed by Zienkiewicz and Zhu (Int. J. Numer. Meth. Engng 1992; 33:1331) is modified to allow for discontinuities of certain stress components in the case of localization arising from, e.g., cracking of concrete. The developed error estimator is part of a calculation scheme for adaptive non-linear FE analysis: if the estimated error exceeds a prespecified threshold value in the course of an adaptive analysis, a new mesh is generated. After mesh refinement, the state variables are transferred from the old to the new mesh and the calculation is restarted at the load level attained with the old mesh. The performance of the proposed error estimator is demonstrated by means of adaptive calculations of a reinforced concrete (RC) cooling tower, and the influence of the user-prescribed error threshold on the numerical results is investigated. Copyright © 2002 John Wiley & Sons, Ltd.

20.
To address the low accuracy and large errors currently observed when machining internal spherical surfaces on semi-closed-loop lathes, this study investigates the effect of feed-system stiffness on machining error with the aim of improving the accuracy of internal spherical surfaces. The feed-system stiffness is analyzed using Hertz contact theory and a mathematical stiffness model is established. The process of machining internal spherical surfaces on a semi-closed-loop lathe is analyzed, the forces acting on the feed system while cutting the curved surface are derived, and an error expression is given. Simulation of the machining-error model shows that the error produced by feed-system stiffness depends on the tool location point and the shape of the generatrix arc, and that this error can be compensated by issuing additional motion pulses. An interpolation algorithm incorporating this compensation mechanism is proposed, providing theoretical guidance for improving the machining accuracy of internal spherical surfaces.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号