Similar Documents
 Found 20 similar documents (search time: 31 ms)
1.
Although the statistical process control (SPC) literature contains a considerable body of research on multivariate variables control charts (which focus on variable quality characteristics), far fewer investigations address multivariate attributes control charts (which rely on attribute quality characteristics). For multivariate attributes charting in particular, monitoring auto-correlated data is of special interest, since real-world processes usually generate data with an auto-correlation structure. Ignoring this structure when developing a multivariate control chart inflates both the type I and type II error rates and consequently degrades the chart's performance. The main difficulty in developing multivariate attributes control charts is the absence of a joint distribution for the quality characteristics; this gap can be closed by constructing the joint distribution with a copula. In this paper, we use a Markov approach to model the auto-correlated data and then apply a copula to build the joint distribution of two auto-correlated binary data series. Based on this joint distribution, we develop a cumulative sum (CUSUM) chart, which we call the copula Markov CUSUM chart. The proposed chart is compared with the most recent effective alternative in the literature and, by the average number of observations to signal (ANOS) measure, performs better. In addition, a real case study involving two correlated diseases, Type 2 Diabetes Mellitus and obesity, each with an auto-correlated structure, demonstrates the applicability of the chart. Copyright © 2012 John Wiley & Sons, Ltd.
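As a rough illustration of the charting idea (my own simplification, not the authors' exact construction), the sketch below builds a joint pmf for two dependent Bernoulli series with a Clayton copula and runs a log-likelihood-ratio CUSUM on the paired observations. The first-order Markov (auto-correlation) modelling is deliberately omitted, and the parameters `p0`, `q0`, `theta`, and the threshold `h` are illustrative assumptions.

```python
import numpy as np

def clayton_cdf(u, v, theta=2.0):
    """Clayton copula C(u, v); theta > 0 controls the dependence strength."""
    return (u**-theta + v**-theta - 1.0)**(-1.0 / theta)

def joint_bernoulli_pmf(p, q, theta=2.0):
    """Joint pmf of two dependent Bernoulli(p) and Bernoulli(q) variables,
    obtained by differencing the copula-based joint cdf."""
    c00 = clayton_cdf(1 - p, 1 - q, theta)      # P(X=0, Y=0) = C(P(X=0), P(Y=0))
    return {(0, 0): c00,
            (0, 1): (1 - p) - c00,              # P(X=0) - P(X=0, Y=0)
            (1, 0): (1 - q) - c00,
            (1, 1): p + q - 1 + c00}

def copula_cusum(pairs, pmf0, pmf1, h=4.0):
    """One-sided CUSUM on the log-likelihood ratio of the joint pmfs.
    Returns the index of the first signal, or None."""
    s = 0.0
    for t, xy in enumerate(pairs):
        s = max(0.0, s + np.log(pmf1[xy] / pmf0[xy]))
        if s > h:
            return t
    return None

# Illustrative in-control and shifted joint distributions.
pmf0 = joint_bernoulli_pmf(p=0.10, q=0.15, theta=2.0)
pmf1 = joint_bernoulli_pmf(p=0.20, q=0.25, theta=2.0)

rng = np.random.default_rng(1)
keys = list(pmf1)
pairs = [keys[i] for i in rng.choice(4, size=200, p=[pmf1[k] for k in keys])]
print("signal at observation:", copula_cusum(pairs, pmf0, pmf1))
```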

2.
Zero-inflated probability models are used to model count data that have an excessive number of zeros. These models are especially useful for high-yield or health-related processes. The zero-inflated binomial distribution is an extension of the ordinary binomial distribution that accounts for the excess zeros. In this paper, one-sided cumulative sum (CUSUM)-type control charts are proposed for monitoring increases or decreases in the parameter p of a zero-inflated binomial process. The results of an extensive numerical study concerning the statistical design of the proposed schemes, as well as their practical implementation, are provided. Copyright © 2015 John Wiley & Sons, Ltd.
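A minimal sketch of the idea (a simplification, not the authors' design): the zero-inflated binomial pmf below mixes a point mass at zero with a Binomial(n, p) count, and an upper one-sided CUSUM accumulates the log-likelihood ratio for a shift from `p0` to `p1`. The values of `phi`, the shift sizes, and the threshold `h` are illustrative assumptions.

```python
import numpy as np
from scipy.stats import binom

def zib_pmf(k, n, p, phi):
    """Zero-inflated binomial pmf: extra probability mass phi at zero,
    otherwise an ordinary Binomial(n, p) count."""
    return phi * (k == 0) + (1 - phi) * binom.pmf(k, n, p)

def zib_cusum_upper(counts, n, phi, p0, p1, h=5.0):
    """Upper CUSUM for an increase in p (p1 > p0).
    Returns the index of the first signal, or None."""
    s = 0.0
    for t, k in enumerate(counts):
        s = max(0.0, s + np.log(zib_pmf(k, n, p1, phi) / zib_pmf(k, n, p0, phi)))
        if s > h:
            return t
    return None

# Simulate an in-control stretch followed by an upward shift in p.
rng = np.random.default_rng(7)
def zib_sample(size, n, p, phi):
    zeros = rng.random(size) < phi
    return np.where(zeros, 0, rng.binomial(n, p, size))

counts = np.concatenate([zib_sample(100, 50, 0.02, 0.3),
                         zib_sample(100, 50, 0.06, 0.3)])
print("signal at sample:", zib_cusum_upper(counts, 50, 0.3, 0.02, 0.06))
```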

3.
Panagiotis Isigonis, Antreas Afantitis, Dalila Antunes, Alena Bartonova, Ali Beitollahi, Nils Bohmer, Evert Bouman, Qasim Chaudhry, Mihaela Roxana Cimpan, Emil Cimpan, Shareen Doak, Damien Dupin, Doreen Fedrigo, Valérie Fessard, Maciej Gromelski, Arno C. Gutleb, Sabina Halappanavar, Peter Hoet, Nina Jeliazkova, Stéphane Jomini, Sabine Lindner, Igor Linkov, Eleonora Marta Longhin, Iseult Lynch, Ineke Malsch, Antonio Marcomini, Espen Mariussen, Jesus M. de la Fuente, Georgia Melagraki, Finbarr Murphy, Michael Neaves, Rolf Packroff, Stefan Pfuhler, Tomasz Puzyn, Qamar Rahman, Elise Rundén Pran, Elena Semenzin, Tommaso Serchi, Christoph Steinbach, Benjamin Trump, Ivana Vinković Vrček, David Warheit, Mark R. Wiesner, Egon Willighagen, Maria Dusinska 《Small (Weinheim an der Bergstrasse, Germany)》2020,16(36)
Nanotechnologies have reached a maturity and market penetration that require nano-specific changes in legislation and harmonization among legislative domains, such as the amendments to REACH for nanomaterials (NMs) which came into force in 2020. Thus, an assessment of the components and regulatory boundaries of NM risk governance is timely, alongside related methods and tools, as part of the global effort to optimise nanosafety and integrate it into product design processes via Safe(r)-by-Design (SbD) concepts. This paper provides an overview of the state of the art in risk governance of NMs and lays out the theoretical basis for the development and implementation of an effective, trustworthy and transparent risk governance framework for NMs. The proposed framework enables continuous integration of the evolving state of the science, leverages best practice from contiguous disciplines and facilitates responsive re-thinking of nanosafety governance to meet future needs. To achieve and operationalise such a framework, a science-based Risk Governance Council (RGC) for NMs is being developed. The framework will provide a toolkit for independent risk governance of NMs and will integrate the needs and views of stakeholders. An extension of this framework to relevant advanced materials and emerging technologies is also envisaged, in view of the future foundations of risk research in Europe and globally.

4.
High-dimensional applications pose a significant challenge to the capability of conventional statistical process control techniques in detecting abnormal changes in process parameters. These techniques fail to recognize out-of-control signals and to locate the root causes of faults, especially when small shifts occur in high-dimensional variables under the sparsity assumption of process mean changes. In this paper, we propose a variable selection-based multivariate cumulative sum (VS-MCUSUM) chart for enhancing sensitivity to out-of-control conditions in high-dimensional processes. While existing charts with variable selection techniques tend to perform poorly in detecting small shifts in process parameters, owing to misidentification of the 'faulty' parameters, the proposed chart identifies the shifted parameters well even for small process shifts. The performance of the VS-MCUSUM chart under different combinations of design parameters is compared with the conventional MCUSUM and the VS-multivariate exponentially weighted moving average control charts. Finally, a case study is presented as a real-life example to illustrate the operational procedures of the proposed chart. Both the simulation and numerical studies show the superior performance of the proposed chart in detecting mean shifts in multivariate processes. Copyright © 2016 John Wiley & Sons, Ltd.
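The following toy sketch (a simplification, not the authors' exact chart) combines Crosier's MCUSUM recursion with a hard variable-selection step that keeps only the `q` largest components of the cumulative deviation vector before forming the chart statistic. The reference value `k`, threshold `h`, and `q` are illustrative and would need calibration by simulation for a target in-control run length.

```python
import numpy as np

def vs_mcusum(X, k=1.0, h=5.0, q=3):
    """Crosier-style MCUSUM with a hard variable-selection step.
    X : (T, p) array of standardized observations (in-control mean 0).
    At each step only the q most deviant coordinates of the cumulative
    deviation vector are retained before computing the statistic."""
    T, p = X.shape
    s = np.zeros(p)
    for t in range(T):
        w = s + X[t]
        keep = np.argsort(np.abs(w))[-q:]      # variable selection
        w_sel = np.zeros(p)
        w_sel[keep] = w[keep]
        c = np.linalg.norm(w_sel)
        s = np.zeros(p) if c <= k else w_sel * (1.0 - k / c)
        if np.linalg.norm(s) > h:
            return t
    return None

# Sparse mean shift in 2 of 20 variables after time 100.
rng = np.random.default_rng(3)
X = rng.standard_normal((300, 20))
X[100:, :2] += 1.5
print("signal at time:", vs_mcusum(X, k=1.0, h=5.0, q=3))
```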

5.
Implementing an enterprise resource planning (ERP) system is a sophisticated, lengthy, and costly process that is prone to serious failure. Thus, it is essential to perform a success assessment at the post-implementation stage of an ERP project to evaluate how far the system has achieved its predetermined objectives. This paper proposes a practical framework for assessing a firm's ERP post-implementation success that draws on existing models through a fuzzy analytic network process. The construct of ERP success is broken down into three main parts: managerial success, organisational success, and individual success. Using this framework, the firm's ERP system success can be determined and the required improvement projects can be proposed to raise the success level. The proposed framework has been applied to a real international company, in the field of manufacturing and supplying turbines, to measure the firm's ERP post-implementation success. Finally, the advantages of the model are illustrated.
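As a hedged illustration of the scoring step only (a full fuzzy ANP, with pairwise comparisons and a supermatrix, is beyond a short sketch), the code below aggregates triangular fuzzy ratings of the three success dimensions with triangular fuzzy weights and defuzzifies by centroid. All numbers are invented for the example.

```python
# Triangular fuzzy number (l, m, u): lower, modal, upper value.
def tfn_mul(a, b):
    """Approximate product of two positive triangular fuzzy numbers."""
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

def tfn_add(a, b):
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def centroid(a):
    """Centroid defuzzification of a triangular fuzzy number."""
    return sum(a) / 3.0

# Illustrative fuzzy weights (e.g. derived from expert comparisons)
# and fuzzy success ratings on a 0-10 scale for the three dimensions.
weights = {"managerial":     (0.25, 0.35, 0.45),
           "organisational": (0.30, 0.40, 0.50),
           "individual":     (0.15, 0.25, 0.35)}
ratings = {"managerial":     (5.0, 6.0, 7.0),
           "organisational": (6.0, 7.0, 8.0),
           "individual":     (4.0, 5.0, 6.5)}

total = (0.0, 0.0, 0.0)
for dim in weights:
    total = tfn_add(total, tfn_mul(weights[dim], ratings[dim]))

print("fuzzy ERP success score:", total)
print("defuzzified score:", round(centroid(total), 2))
```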

6.
The Weibull distribution is one of the most important probability models for time between events, system reliability, and particle sizes, among other applications. Efficiently monitoring changes in a Weibull process is therefore an important research topic. Various statistical process monitoring schemes have been developed for different process parameters, including some for the Weibull parameters. Most of these schemes, however, are designed to monitor and control a single process parameter, even though the Weibull distribution has two important model parameters. Recently, several researchers have studied schemes for jointly monitoring the mean and variance of a normally distributed process using a single plotting statistic. Nevertheless, there is still a dearth of research on joint monitoring of non-normal process parameters. In this context, we develop control schemes for simultaneously monitoring the scale and shape parameters of processes that follow the Weibull distribution. Implementation procedures are developed, and the performance properties of the proposed schemes are investigated. We also offer an illustrative example along with a summary and recommendations.
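One simple way to obtain a single plotting statistic for both Weibull parameters (a generic likelihood-ratio construction, not necessarily the authors' schemes) is sketched below: for each subgroup, compare the log-likelihood under the in-control (shape, scale) pair with the maximized log-likelihood, and plot 2 log Λ. The in-control values and the asymptotic control limit are illustrative assumptions.

```python
import numpy as np
from scipy.stats import weibull_min, chi2

def joint_weibull_stat(sample, shape0, scale0):
    """Likelihood-ratio statistic comparing in-control (shape0, scale0)
    against the per-sample MLE; large values suggest a change in the
    shape and/or scale parameter."""
    ll0 = weibull_min.logpdf(sample, shape0, loc=0, scale=scale0).sum()
    c_hat, _, s_hat = weibull_min.fit(sample, floc=0)   # MLE, location fixed at 0
    ll1 = weibull_min.logpdf(sample, c_hat, loc=0, scale=s_hat).sum()
    return 2.0 * (ll1 - ll0)

rng = np.random.default_rng(11)
shape0, scale0 = 2.0, 10.0
ucl = chi2.ppf(0.995, df=2)   # rough asymptotic limit; two parameters tested

# In-control subgroups, then a shift in both shape and scale.
for label, (c, s) in [("in-control", (2.0, 10.0)), ("shifted", (1.2, 14.0))]:
    stats = []
    for _ in range(5):
        sample = s * rng.weibull(c, 50)   # Weibull(shape=c, scale=s) draws
        stats.append(joint_weibull_stat(sample, shape0, scale0))
    print(label, np.round(stats, 2), " UCL:", round(ucl, 2))
```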

7.
A macroscopic framework for the simulation of physical degradation processes in quasi-brittle porous materials is proposed. The framework employs the partition of unity (PU) concept and introduces a cohesive zone model, capturing the entire failure process starting from the growth and coalescence of micro-defects until the formation of macro-cracks. The framework incorporates the interaction between the failure process and the heat and mass transfer in the porous medium. As an example, physical degradation of an outside render is studied. The analysis illustrates that both material and interface failure can be investigated with this formulation. Depending on the boundary conditions, either one dominant crack or a network of small cracks is formed. Copyright © 2010 John Wiley & Sons, Ltd.
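For readers unfamiliar with cohesive zone models, the sketch below implements a generic bilinear traction-separation law (a standard textbook form, not the specific law used in this paper): traction rises linearly to a strength `t_max` at separation `d0`, then softens linearly to zero at `d_f`, with unloading handled through an irreversible damage variable.

```python
import numpy as np

def bilinear_cohesive(delta_history, t_max=3.0, d0=0.01, d_f=0.1):
    """Bilinear traction-separation law with irreversible damage.
    delta_history : opening displacements over (pseudo-)time.
    Returns the traction at each step."""
    k0 = t_max / d0              # initial (undamaged) stiffness
    d_max = 0.0                  # largest separation seen so far
    tractions = []
    for d in delta_history:
        d_max = max(d_max, d)
        if d_max <= d0:          # still elastic, no damage
            damage = 0.0
        elif d_max >= d_f:       # fully debonded
            damage = 1.0
        else:                    # linear softening branch
            damage = d_f * (d_max - d0) / (d_max * (d_f - d0))
        tractions.append((1.0 - damage) * k0 * d)
    return np.array(tractions)

# Load, partially unload, then reload to failure.
path = np.concatenate([np.linspace(0, 0.04, 5),
                       np.linspace(0.04, 0.02, 3),
                       np.linspace(0.02, 0.12, 6)])
print(np.round(bilinear_cohesive(path), 3))
```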

8.
To evaluate the capability of manufacturing processes in satisfying the customer's needs, a variety of indices has been developed, some of them designed to analyse processes with multivariate quality characteristics. Most of the multivariate capability indices proposed in the literature are defined under the assumption that the quality characteristics are normally distributed, so that the process region describing the variation of the data has an elliptical shape. In this paper, a multivariate process capability vector with three components is introduced which allows one to assess the capability of a process with either normally or non-normally distributed quality characteristics, thanks to the application of a pair of one-sided models as the process region shape. First, the one-sided models are defined; next, the vector components are introduced and the methodology for evaluating them is presented. The methodology (which in fact can also be applied to both correlated and non-correlated characteristics) is verified on simulated and real problems. The obtained results show that the proposed methodology performs satisfactorily in all considered cases. Copyright © 2014 John Wiley & Sons, Ltd.

9.
A computational framework is presented to evaluate the shape as well as non-shape (parameter) sensitivity of finite thermo-inelastic deformations using the continuum sensitivity method (CSM). Weak sensitivity equations are developed for the large thermo-mechanical deformation of hyperelastic thermo-viscoplastic materials that are consistent with the kinematic, constitutive, contact and thermal analyses used in the solution of the direct deformation problem. The sensitivities are defined in a rigorous sense and the sensitivity analysis is performed in an infinite-dimensional continuum framework. The effects of perturbation in the preform, die surface, or other process parameters are carefully considered in the CSM development for the computation of the die temperature sensitivity fields. The direct deformation and sensitivity deformation problems are solved using the finite element method. The results of the continuum sensitivity analysis are validated extensively by a comparison with those obtained by finite difference approximations (i.e. using the solution of a deformation problem with perturbed design variables). The effectiveness of the method is demonstrated with a number of applications in the design optimization of metal forming processes. Copyright © 2002 John Wiley & Sons, Ltd.
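The validation step mentioned in the abstract can be illustrated on a much smaller problem. The sketch below (a generic direct-differentiation example, not the paper's thermo-inelastic formulation) computes the sensitivity du/dp of a linear system K(p)u = f by solving K du/dp = -(dK/dp)u, then checks it against a central finite difference with a perturbed design variable.

```python
import numpy as np

def solve_bar(p, n=5, f_load=1.0):
    """Static 1D bar: n springs in series, each with stiffness p.
    Returns nodal displacements u solving K(p) u = f, and K itself."""
    K = p * (np.diag(2.0 * np.ones(n))
             - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
    K[-1, -1] = p                      # free end
    f = np.zeros(n); f[-1] = f_load    # tip load
    return np.linalg.solve(K, f), K

p = 3.0
u, K = solve_bar(p)

# Direct-differentiation sensitivity: K du/dp = -(dK/dp) u; here dK/dp = K/p.
dKdp = K / p
du_analytic = np.linalg.solve(K, -dKdp @ u)

# Finite-difference check using perturbed design variables.
eps = 1e-6
u_plus, _ = solve_bar(p + eps)
u_minus, _ = solve_bar(p - eps)
du_fd = (u_plus - u_minus) / (2 * eps)

print("direct diff.:", np.round(du_analytic, 6))
print("finite diff.:", np.round(du_fd, 6))
```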

10.
In this paper, maximum likelihood step-change point estimators of the location parameter, the out-of-control sample, and the out-of-control stage are developed for auto-correlated multistage processes. To do this, the multistage process and the concept of change detection are first discussed, and a time-series model of the process is presented. Assuming step changes in the location parameter of the process, the likelihood functions of different samples before and after an out-of-control signal from an X-bar control chart are then derived under different conditions, and the maximum likelihood estimators are obtained by maximizing them. Finally, the accuracy and precision of the proposed estimators are examined through Monte Carlo simulation experiments. The results show the estimators to be promising. Copyright © 2011 John Wiley & Sons, Ltd.
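A stripped-down version of the estimation idea is sketched below: a single stage, known AR(1) parameters, and a step change in the mean only (the multistage structure of the paper is omitted). For each candidate change point tau, the post-change mean is profiled out by least squares and the tau maximizing the Gaussian log-likelihood of the AR(1) innovations is selected.

```python
import numpy as np

def mle_change_point_ar1(x, mu0, phi):
    """MLE of a step-change point in the mean of an AR(1) process,
    (x_t - mu_t) = phi * (x_{t-1} - mu_{t-1}) + a_t, a_t ~ N(0, sigma^2),
    with mu0 and phi known. Returns (tau_hat, mu1_hat); the change is
    assumed to occur after observation tau."""
    T = len(x)
    best_tau, best_mu1, best_sse = None, None, np.inf
    for tau in range(0, T - 1):
        b = np.empty(T - 1); c = np.empty(T - 1)   # innovation = b - c * mu1
        for t in range(1, T):
            if t <= tau:                   # both terms pre-change
                b[t-1], c[t-1] = (x[t] - mu0) - phi * (x[t-1] - mu0), 0.0
            elif t == tau + 1:             # straddles the change point
                b[t-1], c[t-1] = x[t] - phi * (x[t-1] - mu0), 1.0
            else:                          # both terms post-change
                b[t-1], c[t-1] = x[t] - phi * x[t-1], 1.0 - phi
        mu1 = (c @ b) / (c @ c)            # profiled post-change mean
        sse = float(((b - c * mu1) ** 2).sum())
        if sse < best_sse:                 # min SSE = max Gaussian likelihood
            best_tau, best_mu1, best_sse = tau, mu1, sse
    return best_tau, best_mu1

# AR(1) data with a true step change after t = 150.
rng = np.random.default_rng(5)
phi, mu0, mu1, T = 0.6, 0.0, 1.0, 250
x = np.empty(T); x[0] = mu0
for t in range(1, T):
    mu = mu0 if t <= 150 else mu1
    mu_prev = mu0 if t - 1 <= 150 else mu1
    x[t] = mu + phi * (x[t-1] - mu_prev) + 0.5 * rng.standard_normal()
print(mle_change_point_ar1(x, mu0, phi))   # tau_hat should be near 150
```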

11.
12.
Failure mode and effects analysis (FMEA) is a widely used risk management technique for identifying potential failures in a system, design, or process and determining the most serious ones for risk reduction. Nonetheless, the traditional FMEA method has been criticized for many deficiencies. Moreover, in the real world, FMEA team members are usually boundedly rational, so their psychological behaviors should be considered. In response, this study presents a novel risk priority model for FMEA that uses interval two-tuple linguistic variables and an integrated multicriteria decision-making (MCDM) method. The interval two-tuple linguistic variables capture FMEA team members' diverse assessments of the risk of failure modes and the weights of risk factors. An integrated MCDM method based on regret theory and TODIM (an acronym in Portuguese for interactive MCDM) is developed to prioritize failure modes while taking experts' psychological behaviors into account. Finally, an illustrative example on medical product development verifies the feasibility and effectiveness of the proposed FMEA. Comparison with other existing methods shows that the proposed linguistic FMEA approach is more advantageous for ranking failure modes under uncertain and complex environments.
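To give a flavour of the TODIM step (classic crisp TODIM on a numeric decision matrix; the paper's interval two-tuple linguistic variables and regret-theory component are not reproduced here), the sketch below computes pairwise dominance degrees and a global ranking of failure modes. The matrix, weights, and attenuation factor `theta` are invented for the example.

```python
import numpy as np

def todim(X, w, theta=1.0):
    """Classic TODIM. X: (m, n) decision matrix, normalized so that
    larger = more severe; w: criteria weights summing to 1.
    Returns global prominence values; larger = higher risk priority."""
    m, n = X.shape
    wr = w / w.max()                   # weights relative to the reference criterion
    W = wr.sum()
    delta = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            for c in range(n):
                d = X[i, c] - X[j, c]
                if d > 0:              # perceived gain over alternative j
                    delta[i, j] += np.sqrt(wr[c] * d / W)
                elif d < 0:            # perceived loss, attenuated by theta
                    delta[i, j] -= np.sqrt(W * (-d) / wr[c]) / theta
    xi = delta.sum(axis=1)
    return (xi - xi.min()) / (xi.max() - xi.min())

# Four failure modes rated on O, S, D (occurrence, severity, detection),
# scores scaled to [0, 1]; weights favour severity.
X = np.array([[0.6, 0.8, 0.3],
              [0.4, 0.9, 0.6],
              [0.7, 0.5, 0.5],
              [0.2, 0.4, 0.8]])
w = np.array([0.3, 0.45, 0.25])
scores = todim(X, w, theta=2.5)
print("priority scores:", np.round(scores, 3))
print("ranking (most critical first):", np.argsort(-scores) + 1)
```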

13.
In this paper, we propose a new approach for computing 2D FFTs that is suitable for implementation on a systolic array architecture. The algorithm is derived from a Cooley decimation-in-time algorithm using an appropriate indexing process. It is proved that the number of multiplications needed by the proposed algorithm is significantly reduced, while the number of additions remains almost identical to that of conventional 2D FFTs. Comparison results show the good performance of the proposed 2D FFT algorithm against the row-column FFT transform. Copyright © 1999 John Wiley & Sons, Ltd.
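For reference, the row-column baseline that the authors compare against can be written in a few lines: a 2D DFT factorizes into 1D FFTs over the rows followed by 1D FFTs over the columns. The sketch below checks this against `numpy.fft.fft2` (this is the conventional method, not the paper's systolic algorithm).

```python
import numpy as np

def fft2_row_column(x):
    """Conventional row-column 2D FFT: 1D FFTs along each row,
    then 1D FFTs along each column of the intermediate result."""
    rows = np.fft.fft(x, axis=1)      # transform every row
    return np.fft.fft(rows, axis=0)   # then every column

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
print(np.allclose(fft2_row_column(x), np.fft.fft2(x)))   # True
```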

14.
Knowing when a process has changed would simplify the search for and identification of the special cause. In this paper, we propose a maximum likelihood estimator for the change point of the process fraction nonconforming that does not require knowledge of the exact change type a priori; instead, we assume only that the change belongs to a family of monotonic changes. We compare the proposed change-point estimator to the maximum likelihood estimator derived under a simple step-change assumption, for a number of monotonic change types and following a signal from a binomial cumulative sum (CUSUM) control chart. We conclude that the proposed estimator is preferable when the change is known only to be monotonic. The results show that it provides process engineers with an accurate and useful estimate of the time of the process change regardless of the type of monotonic change that may be present. Copyright © 2006 John Wiley & Sons, Ltd.
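The step-change benchmark the authors compare against is easy to sketch (this is the simple step-change MLE, not their monotonic-change estimator): after a CUSUM signal, evaluate the binomial log-likelihood for each candidate change point tau, with the post-change fraction nonconforming estimated by its sample mean.

```python
import numpy as np

def step_change_mle(x, n, p0):
    """MLE of a step-change point for the fraction nonconforming.
    x : counts of nonconforming items per sample of size n,
    p0: known in-control fraction. Returns (tau_hat, p1_hat),
    where the change occurs after sample tau."""
    T = len(x)
    best_tau, best_p1, best_ll = 0, p0, -np.inf
    for tau in range(T - 1):
        post = x[tau + 1:]
        p1 = post.sum() / (n * (T - tau - 1))     # post-change MLE of p
        p1 = min(max(p1, 1e-9), 1 - 1e-9)         # guard the log terms
        ll = (post * np.log(p1) + (n - post) * np.log(1 - p1)).sum()
        pre = x[:tau + 1]
        ll += (pre * np.log(p0) + (n - pre) * np.log(1 - p0)).sum()
        if ll > best_ll:
            best_tau, best_p1, best_ll = tau, p1, ll
    return best_tau, best_p1

# In-control at p0 = 0.05, step change to 0.12 after sample 60.
rng = np.random.default_rng(9)
x = np.concatenate([rng.binomial(100, 0.05, 61), rng.binomial(100, 0.12, 40)])
print(step_change_mle(x, 100, 0.05))              # tau_hat should be near 60
```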

15.
Major difficulties in the study of high-quality processes with traditional process monitoring techniques are a high false alarm rate and a negative lower control limit. Time-between-events control charts were introduced to overcome these problems in the high-quality process monitoring setup: they detect an out-of-control situation without a great loss of sensitivity compared with existing charts. High-quality control charts have gained much attention over the last decade because of the technological revolution. This article provides an overview of recent research and presents it in a unifying framework. To summarize the results and draw precise conclusions from the statistical point of view, cross-tabulations are also given. Copyright © 2016 John Wiley & Sons, Ltd.
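A minimal time-between-events scheme (a generic exponential CUSUM offered as an illustration, not any specific chart from this review) is sketched below: gaps between events are monitored with a one-sided CUSUM on the log-likelihood ratio, tuned to detect a drop in the mean gap, i.e. process deterioration. The in-control and shifted means and the threshold `h` are illustrative assumptions.

```python
import numpy as np

def tbe_cusum(gaps, theta0, theta1, h=5.0):
    """One-sided CUSUM on exponential time-between-events, tuned to
    detect a drop in the mean gap from theta0 to theta1 (theta1 < theta0).
    Returns the index of the first signal, or None."""
    drift = np.log(theta0 / theta1)
    slope = 1.0 / theta0 - 1.0 / theta1
    s = 0.0
    for t, x in enumerate(gaps):
        s = max(0.0, s + drift + slope * x)   # log-likelihood-ratio increment
        if s > h:
            return t
    return None

rng = np.random.default_rng(13)
# Mean time between defects drops from 100 h to 25 h after the 30th event.
gaps = np.concatenate([rng.exponential(100.0, 30), rng.exponential(25.0, 30)])
print("signal at event:", tbe_cusum(gaps, theta0=100.0, theta1=25.0))
```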

16.
Reliability of an engineering system can be improved by investing in redundant (spare) parts. However, the cost-efficiency of such an investment is a significant practical concern. A continuous-time Markov chain (CTMC) model is therefore presented in this paper to analyze the system's reliability when redundant components are allocated; the model also captures the system's repair and failure behaviour through appropriately defined CTMC states. Subsequently, the net present value (NPV) approach is applied to a variety of scenarios to investigate the effectiveness of investment in spare parts using break-even point (BEP) analysis. Finally, a comprehensive analysis examines the impact of input parameters, including the interest rate, the initial investment cost, and the periodic profit, on the decision about the optimal number of spare parts.
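A compact version of the modelling chain (a standard machine-repair CTMC plus a simple NPV comparison; the rates, costs, and horizon are invented, and the paper's model is richer) is sketched below: steady-state availability is obtained from the generator matrix, and the NPV of adding spares is weighed against their purchase cost.

```python
import numpy as np

def availability(n_spares, lam=0.5, mu=2.0):
    """Steady-state availability of a machine-repair CTMC: one operating
    unit, n_spares cold spares, one repair crew. State k = number of
    failed units; the system is down at k = n_spares + 1."""
    K = n_spares + 1
    Q = np.zeros((K + 1, K + 1))
    for k in range(K + 1):
        if k < K:                 # a unit is operating and can fail
            Q[k, k + 1] += lam
        if k > 0:                 # one unit under repair
            Q[k, k - 1] += mu
        Q[k, k] = -Q[k].sum()
    # Solve pi Q = 0 together with sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(K + 1)])
    b = np.zeros(K + 2); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return 1.0 - pi[-1]           # probability the system is up

def npv_of_spares(n_spares, profit=120.0, spare_cost=60.0,
                  rate=0.08, years=10):
    """NPV of yearly profit scaled by availability, minus the
    up-front cost of the spares."""
    a = availability(n_spares)
    cash = sum(profit * a / (1 + rate) ** t for t in range(1, years + 1))
    return cash - spare_cost * n_spares

for s in range(4):
    print(f"spares={s}: availability={availability(s):.4f}, "
          f"NPV={npv_of_spares(s):7.2f}")
```

With these made-up numbers the NPV peaks at one spare: the second spare buys too little extra availability to repay its cost, which is exactly the break-even trade-off the abstract describes.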

17.
Control charts are the most popular statistical process control tools for monitoring process changes. An out-of-control signal from a control chart indicates that the process has changed, but it does not indicate the real time of the change, which is essential for identifying and removing assignable causes and ultimately improving the process. Identifying the real time of the change is known as the change-point estimation problem. Most traditional methods for estimating the process change point assume that the process follows a normal distribution with known parameters, which is seldom true. In this paper, we propose clustering techniques to estimate Shewhart control chart change points. The proposed approach depends on neither the true values of the parameters nor the distribution of the process variables; accordingly, it is applicable in both phase I and phase II, to normal and non-normal processes. Finally, we compare the performance of the proposed method with traditional procedures through extensive simulation studies. Copyright © 2008 John Wiley & Sons, Ltd.
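The clustering idea can be illustrated in one dimension (a deliberately simplified stand-in, not the authors' exact procedure): split the time-ordered observations into two contiguous clusters at every candidate boundary and pick the split minimizing the within-cluster sum of squares, which needs no knowledge of the in-control parameters or the process distribution.

```python
import numpy as np

def cluster_change_point(x):
    """Estimate a change point via two contiguous 'clusters': choose the
    boundary minimizing the total within-cluster sum of squares.
    Distribution-free: no in-control parameters are required."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    best_tau, best_wss = None, np.inf
    for tau in range(1, T):            # cluster 1 = x[:tau], cluster 2 = x[tau:]
        wss = ((x[:tau] - x[:tau].mean()) ** 2).sum() \
            + ((x[tau:] - x[tau:].mean()) ** 2).sum()
        if wss < best_wss:
            best_tau, best_wss = tau, wss
    return best_tau                    # first index of the second cluster

# Non-normal (gamma) process with a mean change at t = 80.
rng = np.random.default_rng(21)
x = np.concatenate([rng.gamma(4.0, 1.0, 80), rng.gamma(4.0, 1.5, 40)])
print("estimated change point:", cluster_change_point(x))
```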

18.
Because of the characteristics of a system or process, several prespecified changes may happen in some statistical process control applications. Thus, one possible and challenging problem in profile monitoring is detecting changes away from the 'normal' profile toward one of several prespecified 'bad' profiles. In this article, to monitor prespecified changes in linear profiles, two two-sided cumulative sum (CUSUM) schemes are proposed based on Student's t-statistic, using two separate statistics and a single statistic, respectively. Simulation results show that the CUSUM scheme with a single statistic uniformly outperforms the one with two separate statistics. Both CUSUM schemes also perform better than alternative methods in detecting small shifts toward the prespecified changes, and become comparable for moderate or large shifts when the number of observations in each profile is large. To overcome a weakness of the proposed CUSUM methods, two modified CUSUM schemes are developed using a z-statistic and are studied when the in-control parameters are estimated. Simulation results indicate that the modified CUSUM chart with a single charting statistic slightly outperforms the one with two separate statistics in terms of the average run length and its standard deviation. Finally, illustrative examples indicate that the CUSUM schemes are effective. Copyright © 2016 John Wiley & Sons, Ltd.
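A pared-down version of the single-statistic idea is sketched below (ordinary linear profiles and a shift in slope only; the prespecified-direction machinery of the paper is not reproduced): each profile is reduced to a t-statistic for its slope, and a two-sided CUSUM with one plotted statistic monitors the stream. The values of `k` and `h` are illustrative.

```python
import numpy as np

def profile_t_stat(x, y, b1_0):
    """t-statistic for the fitted slope of one profile against the
    in-control slope b1_0 (ordinary least squares)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    s2 = res[0] / (len(x) - 2)                 # residual variance estimate
    sxx = ((x - x.mean()) ** 2).sum()
    return (beta[1] - b1_0) / np.sqrt(s2 / sxx)

def two_sided_cusum(t_stats, k=0.5, h=5.0):
    """Two-sided CUSUM on per-profile t-statistics; the plotted
    statistic is the larger of the upper and lower sums."""
    s_up = s_dn = 0.0
    for i, t in enumerate(t_stats):
        s_up = max(0.0, s_up + t - k)
        s_dn = max(0.0, s_dn - t - k)
        if max(s_up, s_dn) > h:
            return i
    return None

rng = np.random.default_rng(17)
x = np.linspace(0, 1, 20)
b0, b1 = 1.0, 2.0
t_stats = []
for j in range(80):
    slope = b1 if j < 40 else b1 + 0.4         # slope drift starts at profile 40
    y = b0 + slope * x + 0.1 * rng.standard_normal(x.size)
    t_stats.append(profile_t_stat(x, y, b1))
print("signal at profile:", two_sided_cusum(t_stats))
```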

19.
A novel component analysis model is proposed to identify the mixed process signals frequently encountered in statistical process control (SPC) and engineering process control (EPC) practice. Built on particle swarm optimization (PSO), a state-of-the-art evolutionary algorithm, the proposed model finds a demixing matrix by maximizing the determinant of the second-order moment (variance-covariance) matrix of the reconstructed signals. The estimated demixing matrix is then used to separate mixtures of several original process signals. The process signals considered in this paper include inconsistent-variance series, autoregressive (AR) series, step changes, and Gaussian noise; in practice, most industrial manufacturing processes can be well characterized by a mixture of these four types of data. Under the proposed model, the blind signal separation framework is cast as a nonlinear constrained optimization problem in which only the demixing matrix is unknown. Several illustrative examples involving linear mixtures of process signals with different statistical characteristics demonstrate the new component analysis model.

20.
A framework combining an artificial neural network (ANN) modelling technique, data mining, and an ant colony optimisation (ACO) algorithm is proposed for determining multiple-input multiple-output (MIMO) process parameters for the initial chemical-mechanical planarisation (CMP) processes used in semiconductor manufacturing. Because the ANN acts as a black box in the solution procedure, the decision-tree approach of data mining is adopted to provide the necessary information for a real-valued ACO. The simulation results demonstrate that the proposed method can be an efficient tool for selecting a properly defined parameter combination for the CMP process.
