Similar Literature
 A total of 20 similar documents were retrieved (search time: 731 ms).
1.
Quality control plays an important part in most industrial systems. Its role in providing relevant and timely data to management for decision-making purposes is vital. A method that uses statistical techniques to monitor and control product quality is called statistical process control (SPC), in which control charts are the test tools most frequently used for monitoring the manufacturing process. Engineers or managers can evaluate an abnormal process by applying SPC zone rules to control charts. In the conventional use of the zone rules, however, the user can only determine whether or not the process is out of control; what action should be taken to adjust the process remains uncertain and is judged from knowledge of the system and past experience. This paper explores the integration of fuzzy logic and control charts to create and design a fuzzy–SPC evaluation and control (FSEC) method based on the application of fuzzy logic to the SPC zone rules. A simulation program implementing FSEC was written in Borland C++ 5.0 and the simulation results were obtained and analysed. The abnormal processes simulated were automatically adjusted for each of the zone rules tested and showed improved performance after the control action, confirming the merit of the technique as a method that derives a specific numerical control action from a quality evaluation criterion. Copyright © 2000 John Wiley & Sons, Ltd.
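A minimal Python sketch of the conventional zone-rule check that the FSEC method starts from (the fuzzy adjustment itself is not reproduced here); the choice of two Western Electric rules, the 3-sigma limits and the simulated data are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def zone_rule_signals(x, mu, sigma):
    """Flag two classic Western Electric zone-rule violations.

    Rule 1: a single point beyond the 3-sigma limits.
    Rule 2: two out of three consecutive points beyond 2 sigma on the same side.
    """
    x = np.asarray(x, dtype=float)
    z = (x - mu) / sigma                      # standardized distance from the centre line
    signals = []
    for i in range(len(z)):
        if abs(z[i]) > 3:                     # rule 1
            signals.append((i, "beyond 3-sigma"))
        if i >= 2:
            window = z[i - 2:i + 1]
            if np.sum(window > 2) >= 2 or np.sum(window < -2) >= 2:   # rule 2
                signals.append((i, "2 of 3 beyond 2-sigma"))
    return signals

# Example: an in-control stream followed by an upward mean shift
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(10, 1, 30), rng.normal(12.5, 1, 10)])
print(zone_rule_signals(data, mu=10, sigma=1))
```

A fuzzy extension in the spirit of the abstract would replace the binary rule outcome with a graded membership value that maps onto a numeric adjustment of the process.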

2.
The inclusion of correlated auxiliary variables into the monitoring scheme of the quality characteristic of interest has gained notable attention in recent statistical process control (SPC) literature. Several authors have investigated the use of a correlated auxiliary variable for efficient monitoring of variability in Phase II of SPC. This phase is generally used to detect any shifts in the expected behavior of the process parameters, which are often estimated from the historical in-control process in Phase I. However, no study has investigated the performance of auxiliary-based variability charts in Phase I of SPC. Here, we propose auxiliary-based dispersion control charts for Phase I of SPC. The auxiliary information is incorporated in regression, ratio, exponential, and power-ratio forms. The performance of the variability charts is evaluated and compared using probability to signal as a performance measure. An illustrative example is also provided to show the application of the charts. This study will provide practitioners with appropriate recommendations on the choice of dispersion charts for Phase I analysis.

3.
Statistical process control charts have been successfully used to monitor process stability in various industries. The need to simultaneously monitor two or more quality characteristics has led to the prevalent adoption of multivariate control charts. However, out-of-control signals in multivariate control charts may be caused by one or more variables, or a set of variables. Therefore, effective quality control requires not only the rapid detection of process fluctuations, but also the correct identification of the variable(s) responsible for those changes. This study approaches the diagnosis of out-of-control signals as a classification task and proposes a support vector machine (SVM)-based ensemble classification model focused on variance shifts in multivariate processes. We address the issues of data diversity and ensemble method in constructing an ensemble model. Simulation results demonstrate the effectiveness of the proposed ensemble classification model in identifying the source of variance change. The proposed method clearly outperforms single classifiers as well as other comparable models including bagging and boosting. The results also reveal that the use of extracted features as input vectors for SVM provides better classification performance than the use of raw data. The proposed SVM-based ensemble classification system provides a reliable tool for the interpretation of out-of-control signals in multivariate process control.  相似文献   
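A rough sketch of the general idea (an SVM ensemble trained on features extracted from simulated process windows); the bagging construction, the bivariate covariance structure and the three summary features are assumptions for illustration and not the ensemble design used in the study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def simulate_window(var_shift_on, n=25):
    """Simulate a window of a bivariate process; class 1 quadruples the variance of x2."""
    cov = np.array([[1.0, 0.5], [0.5, 4.0 if var_shift_on else 1.0]])
    sample = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    # simple extracted features: per-variable standard deviations and the sample correlation
    return [sample[:, 0].std(ddof=1), sample[:, 1].std(ddof=1),
            np.corrcoef(sample.T)[0, 1]]

X = np.array([simulate_window(shift) for shift in ([False] * 500 + [True] * 500)])
y = np.array([0] * 500 + [1] * 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# bagged ensemble of RBF-kernel SVMs trained on the extracted features
ensemble = BaggingClassifier(SVC(kernel="rbf", gamma="scale"), n_estimators=10, random_state=0)
ensemble.fit(X_tr, y_tr)
print("hold-out accuracy:", ensemble.score(X_te, y_te))
```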

4.
VVS Sarma, D Vijay Rao. Sadhana, 1997, 22(1): 121-132
In today’s competitive environment for software products, quality is an important characteristic. The development of large-scale software products is a complex and expensive process. Testing plays a very important role in ensuring product quality. Improving the software development process leads to improved product quality. We propose a queueing model based on re-entrant lines to depict the process of software modules undergoing testing/debugging, inspections and code reviews, verification and validation, and quality assurance tests before being accepted for use. Using the re-entrant line model for software testing, bounds on test times are obtained by considering the state transitions for a general class of modules and solving a linear programming model. Scheduling of software modules for tests at each process step yields the constraints for the linear program. The methodology presented is applied to the development of a software system and bounds on test times are obtained. These bounds are used to allocate time for the testing phase of the project and to estimate the release times of software.  相似文献   

5.
Statistical process control charts are among the most widely used techniques in industry and laboratories for monitoring systems against faults. To control multivariate processes, most classical charts need to model the process structure and assume that the variables are linearly and independently distributed. This study proposes to use a nonparametric method, Support Vector Regression, to construct several control charts that allow monitoring of multivariate nonlinear autocorrelated processes. Also, although most statistical quality control techniques have focused on detecting mean shifts, this research investigates the detection of shifts in other parameters. Based on simulation results, the study shows that, with controlled robustness, the charts are able to detect the different applied disturbances. Moreover, in comparison with an Artificial Neural Network control chart, the proposed charts are notably more effective in detecting faults affecting the process variance.
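As a hedged illustration of one way such a chart could work, the sketch below trains a Support Vector Regression one-step-ahead predictor on an in-control autocorrelated series and charts its prediction residuals; the AR(1) process, the lag structure and the 3-sigma residual limits are assumptions, not the construction from the paper.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)

# AR(1)-type autocorrelated process with a variance increase after t = 150
n, phi, lags = 200, 0.7, 3
x = np.zeros(n)
for t in range(1, n):
    scale = 1.0 if t < 150 else 2.0
    x[t] = phi * x[t - 1] + rng.normal(0, scale)

# one-step-ahead SVR predictor trained on lagged values from the in-control segment
X_in = np.column_stack([x[i:100 - lags + i] for i in range(lags)])
y_in = x[lags:100]
model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X_in, y_in)

# monitor prediction residuals against limits derived from the training residuals
train_resid = y_in - model.predict(X_in)
ucl = 3 * train_resid.std(ddof=1)
X_all = np.column_stack([x[i:n - lags + i] for i in range(lags)])
resid = x[lags:] - model.predict(X_all)
alarms = np.where(np.abs(resid) > ucl)[0]
print("first out-of-limit residual at t =", lags + alarms[0] if alarms.size else "none")
```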

6.
BACKGROUND: Explicit chart review was an integral part of an ongoing national cooperative project, "Using Achievable Benchmarks of Care to Improve Quality of Care for Outpatients with Depression," conducted by a large managed care organization (MCO) and an academic medical center. Many investigators overlook the complexities involved in obtaining high-quality data. Given a scarcity of advice in the quality improvement (QI) literature on how to conduct chart review, the process of chart review was examined and specific techniques for improving data quality were proposed. METHODS: The abstraction tool was developed and tested in a prepilot phase; perhaps the greatest problem detected was abstractor assumption and interpretation. The need for a clear distinction between symptoms of depression or anxiety and physician diagnosis of major depression or anxiety disorder also became apparent. In designing the variables for the chart review module, four key aspects were considered: classification, format, definition, and presentation. For example, issues in format include use of free-text versus numeric variables, categoric variables, and medication variables (which can be especially challenging for abstraction projects). Quantitative measures of reliability and validity were used to improve and maintain the quality of chart review data. Measuring reliability and validity offers assistance with development of the chart review tool, continuous maintenance of data quality throughout the production phase of chart review, and final documentation of data quality. For projects that require ongoing abstraction of large numbers of clinical records, data quality may be monitored with control charts and the principles of statistical process control. RESULTS: The chart review module, which contained 140 variables, was built using MedQuest software, a suite of tools designed for customized data collection. The overall interrater reliability increased from 80% in the prepilot phase to greater than 96% in the final phase (which included three abstractors and 465 unique charts). The mean time per chart was calculated for each abstractor, and the maximum value was 13.7 +/- 13 minutes. CONCLUSIONS: In general, chart review is more difficult than it appears on the surface. It is also project specific, making a "cookbook" approach difficult. Many factors, such as imprecisely worded research questions, vague specification of variables, poorly designed abstraction tools, inappropriate interpretation by abstractors, and poor or missing recording of data in the chart, may compromise data quality.  相似文献   

7.
Many software reliability growth models (SRGMs) based on a non-homogeneous Poisson process (NHPP) have been developed under the assumption of a constant fault detection rate (FDR) and a fault detection process dependent only on the residual fault content. In this paper we develop an SRGM based on an NHPP using a different approach to model development. Here, the fault detection process depends not only on the residual fault content, but also on the testing time. This reflects a realistic situation encountered in software development where the fault detection rate is not constant over the entire testing process, but changes due to variations in resource allocation, defect density, running environment and testing strategy (the change-point). Here, the FDR is defined as a function of testing time. The proposed model also incorporates the testing effort together with the change-point concept, which is useful in addressing runaway software projects and gives project managers a testing-effort control technique and the flexibility to obtain the desired reliability level. Failure data collected from software development projects are used to show the model's applicability and effectiveness. The statistical package for social sciences (SPSS), based on the least-squares method, has been used to estimate the unknown parameters. The mean squared error (MSE), relative predictive error (RPE), average mean squared error (AMSE) and average relative predictive error (ARPE) have been used to validate the model. It is observed that the proposed model's results are accurate, highly predictive and incorporate industrial software project concepts.
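For orientation only, here is a least-squares fit of the basic exponential NHPP mean value function m(t) = a(1 - exp(-bt)) that this class of SRGMs builds on; the change-point and testing-effort extensions described in the abstract are not implemented, and the weekly fault counts are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):
    """Expected cumulative number of faults detected by time t (exponential NHPP)."""
    return a * (1.0 - np.exp(-b * t))

# hypothetical cumulative fault counts per week of testing
t = np.arange(1, 13)
faults = np.array([12, 22, 31, 38, 44, 48, 52, 55, 57, 59, 60, 61])

(a_hat, b_hat), _ = curve_fit(mean_value, t, faults, p0=[70, 0.2])
fitted = mean_value(t, a_hat, b_hat)
mse = np.mean((faults - fitted) ** 2)
print(f"a = {a_hat:.1f} total faults, b = {b_hat:.3f} per week, MSE = {mse:.2f}")
```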

8.
The majority of available microcomputer packages for statistical process control (SPC) are off-line programs which present information regarding quality in the form of control charts. The user has to interpret the charts to infer process and product quality. This paper describes XPC, an on-line expert system for SPC. The system produces mean and range charts and interprets them automatically. XPC consists of five main modules. The first module ascertains process parameters and constructs the charts. The second module performs capability analysis to ensure that these control charts are compatible with the process specifications. The third module interprets on-line data, detects possible out-of-control situations and suggests corrective actions. The fourth module updates the charts to improve process capability. The last module produces periodical reports. XPC is based on Leonardo, an expert system shell with a hybrid knowledge representation facility enabling the use of rules, rulesets, frames, procedures and classes.
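A small sketch of what the first module's chart construction amounts to in practice: Phase I X-bar and R limits computed with the standard tabulated constants for subgroups of five; the subgroup size and the simulated data are the only assumptions.

```python
import numpy as np

# Shewhart control-chart constants for subgroups of size n = 5
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """Compute X-bar and R chart centre lines and control limits from Phase I subgroups."""
    subgroups = np.asarray(subgroups, dtype=float)
    xbar = subgroups.mean(axis=1)                        # subgroup means
    r = subgroups.max(axis=1) - subgroups.min(axis=1)    # subgroup ranges
    xbarbar, rbar = xbar.mean(), r.mean()
    return {
        "xbar": (xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar),
        "range": (D3 * rbar, rbar, D4 * rbar),
    }

rng = np.random.default_rng(3)
phase1 = rng.normal(50, 2, size=(25, 5))                 # 25 subgroups of 5 measurements
print(xbar_r_limits(phase1))
```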

9.
This paper presents an application of the sample autocorrelation function to statistical process control where the process data are serially correlated. Two innovative control charts are illustrated: the sample autocorrelation control chart and the group autocorrelation control chart. The important feature is that these control charts will detect shifts in the autocorrelative structure as well as shifts in the mean of the process. The sample autocorrelation function is typically used to identify an appropriate ARIMA model for a time series; it may also be used as the basis of control charts to detect process upsets. Two unique features distinguish this application of the sample autocorrelation function to statistical process control. First, the sample autocorrelations are exponentially smoothed estimates. This allows the user to control the sensitivity of the sample autocorrelation control chart. Secondly, the sample autocorrelation control chart is applied to a continuous stream of data rather than to a static set of data that has been used to fit an ARIMA model.
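The sketch below shows one possible reading of an exponentially smoothed lag-1 sample autocorrelation updated point by point on a data stream; the recursions used for the smoothed mean, variance and covariance and the smoothing constant lambda = 0.1 are illustrative assumptions, not the authors' exact estimator.

```python
import numpy as np

def smoothed_lag1_autocorr(x, lam=0.1):
    """Exponentially smoothed lag-1 sample autocorrelation of a data stream.

    Smoothed estimates of the level, variance and lag-1 covariance are maintained
    recursively, so the chart statistic can be updated one observation at a time.
    """
    x = np.asarray(x, dtype=float)
    mean, var, cov, stats = x[0], 1.0, 0.0, []   # var initialised arbitrarily
    for t in range(1, len(x)):
        dev_prev = x[t - 1] - mean
        mean = lam * x[t] + (1 - lam) * mean            # smoothed level
        dev = x[t] - mean
        var = lam * dev ** 2 + (1 - lam) * var          # smoothed variance
        cov = lam * dev * dev_prev + (1 - lam) * cov    # smoothed lag-1 covariance
        stats.append(cov / var if var > 0 else 0.0)
    return np.array(stats)

rng = np.random.default_rng(4)
# independent data followed by an AR(1) segment with phi = 0.8
iid = rng.normal(0, 1, 150)
ar = [0.0]
for _ in range(149):
    ar.append(0.8 * ar[-1] + rng.normal(0, 1))
stats = smoothed_lag1_autocorr(np.concatenate([iid, ar]))
print("last 5 smoothed autocorrelations:", np.round(stats[-5:], 2))
```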

10.
In this paper we discuss the basic procedures for the implementation of multivariate statistical process control via control charting. Furthermore, we review multivariate extensions for all kinds of univariate control charts, such as multivariate Shewhart‐type control charts, multivariate CUSUM control charts and multivariate EWMA control charts. In addition, we review unique procedures for the construction of multivariate control charts, based on multivariate statistical techniques such as principal components analysis (PCA) and partial least squares (PLS). Finally, we describe the most significant methods for the interpretation of an out‐of‐control signal. Copyright © 2006 John Wiley & Sons, Ltd.  相似文献   
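As a concrete example of the most basic multivariate Shewhart-type chart reviewed here, the sketch computes the Hotelling T² statistic against a chi-square limit for the known-parameter case; the bivariate parameters and the 0.27% false-alarm rate are illustrative choices.

```python
import numpy as np
from scipy.stats import chi2

def hotelling_t2(obs, mu, sigma):
    """Hotelling T^2 statistic for a single multivariate observation."""
    diff = np.asarray(obs, dtype=float) - mu
    return float(diff @ np.linalg.inv(sigma) @ diff)

# known (in-control) parameters of a bivariate process
mu = np.array([0.0, 0.0])
sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
ucl = chi2.ppf(0.9973, df=2)          # chi-square limit when parameters are known

rng = np.random.default_rng(5)
shifted = rng.multivariate_normal(mu + [1.5, -1.5], sigma, size=5)
for x in shifted:
    t2 = hotelling_t2(x, mu, sigma)
    print(f"T2 = {t2:6.2f}  {'out of control' if t2 > ucl else 'in control'}")
```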

11.
With the development of modern acquisition techniques, data with several correlated quality characteristics are increasingly accessible. Thus, multivariate control charts can be employed to detect changes in the process. This study proposes two multivariate control charts for monitoring process variability (MPVC) using a progressive approach. First, when the process parameters are known, the performance of the MPVC charts is compared with some multivariate dispersion schemes. The results showed that the proposed MPVC charts outperform their counterparts irrespective of the shifts in the process dispersion. The effects of the Phase I estimated covariance matrix on the efficiency of the MPVC charts were also evaluated. The performances of the proposed methods and their counterparts are evaluated by calculating some useful run length properties. An application of the proposed chart is also considered for the monitoring of a carbon fiber tubing process.  相似文献   

12.
Statistical process control methods are usually applied in an environment when periodic sampling and rational subgrouping of process output is appropriate. The resulting summary statistics can be graphically displayed and analysed using either traditional Shewhart control charts or other charts such as those based on the cumulative sum. This article presents an alternative approach, based on time series analysis of all the real-time process data. The time series approach is employed because the sequence of process observations may not be statistically independent. The autocorrelative structure in the data may be captured using an ARIMA model, and the residuals from this model are shown to be an effective input signal for a variety of statistical process control procedures.  相似文献   
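A minimal sketch of the residual-charting idea, assuming an AR(1) series fitted with statsmodels and individual 3-sigma limits on the residuals; the model order, shift size and limit choice are assumptions for illustration, not the article's procedure.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(6)

# AR(1) process with a mean shift introduced after t = 150
n, phi = 200, 0.8
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
x[150:] += 1.5

# fit a time-series model to capture the autocorrelation, then chart the residuals,
# which should be approximately independent while the process is undisturbed
fit = ARIMA(x, order=(1, 0, 0)).fit()
resid = fit.resid
sigma = resid[:150].std(ddof=1)
alarms = np.where(np.abs(resid) > 3 * sigma)[0]
print("residual chart alarms at t =", alarms)
```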

13.
Multivariate statistical process control (MSPC) based for example on principal component analysis (PCA) can make use of the information contained in multiple measured signals simultaneously. This can be much more powerful in detecting variations due to special causes than conventional single variable statistical process control (SPC). Furthermore, the PCA based SPC simplifies monitoring as it limits the number of control charts to typically two charts rather than one for each signal. However, the derived MSPC statistics may suffer from lack of sensitivity if only one or a few variables deviate in a given situation. In this paper we develop a new comprehensive control (COCO) chart procedure that considers both univariate statistics and multivariate statistics derived from PCA in a single plot that allows easy visualization of the combined data from a univariate and multivariate point of view. The method is exemplified using twenty analytical chromatographic peak areas obtained for purity analysis of a biopharmaceutical drug substance. The new control chart procedure detected two different types of faulty events in this study.  相似文献   

14.
With the growth of automation in manufacturing, process quality characteristics are being measured at higher rates and the data are more likely to be auto-correlated. Traditional statistical process control (SPC) techniques of control charting are not applicable in many process industries because process parameters are highly auto-correlated. Several attempts, such as time-series-based control charts, have been made in previous years to extend traditional SPC techniques. However, these extensions have serious limitations for monitoring process mean shifts: they require that a suitable model be identified for the time series of process observations before residuals can be obtained. In this paper, a logistic regression (LR)-based process monitoring model is proposed for enhanced monitoring of processes. It provides a comprehensible and quantitative assessment value for the current process state, obtained from the event occurrence probability calculated by LR. Based on these probability values over the time series, a novel chart, the LRProb chart, is developed for monitoring and visualising process changes. The aim of this research is to analyse the performance of the LRProb chart under the assumption that only a small number of predictable abnormal patterns are available. To this end, the performance of the LRProb chart is evaluated on two real-world industrial cases and simulated processes. Given the simplicity, visualisation and quantification offered by the proposed LRProb chart, the experiments show this approach to be a feasible alternative for quality monitoring in the case of auto-correlated process data.
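A hedged sketch of the general LR-probability idea: a logistic regression is trained to recognise one predictable abnormal pattern from window features, and the predicted event-occurrence probability is then charted over time; the window features, the trend pattern and all data are assumptions, not the LRProb design from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

def window_features(series, width=10):
    """Summary features of sliding windows: mean level, spread and linear trend."""
    rows = []
    for i in range(len(series) - width + 1):
        w = series[i:i + width]
        trend = np.polyfit(np.arange(width), w, 1)[0]
        rows.append([w.mean(), w.std(ddof=1), trend])
    return np.array(rows)

# training data: in-control windows (label 0) and upward-trend windows (label 1)
normal = np.array([rng.normal(0, 1, 10) for _ in range(300)])
trend = np.array([rng.normal(0, 1, 10) + np.linspace(0, 3, 10) for _ in range(300)])
X = np.vstack([window_features(s)[0] for s in np.vstack([normal, trend])])
y = np.array([0] * 300 + [1] * 300)
clf = LogisticRegression().fit(X, y)

# monitoring: chart the event-occurrence probability of each new window over time
stream = np.concatenate([rng.normal(0, 1, 60), rng.normal(0, 1, 40) + np.linspace(0, 3, 40)])
probs = clf.predict_proba(window_features(stream))[:, 1]
print("maximum charted probability:", round(float(probs.max()), 2),
      "at window", int(np.argmax(probs)))
```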

15.
In manufacturing industries, control charts are promising statistical tools for the efficient monitoring of processes. These charts enhance product quality by signaling special-cause variation in a timely manner at any stage of the process. There are two common concerns in statistical process monitoring: the location and the variability of the quality characteristic of interest. Besides the location parameter, the monitoring of process dispersion has remained a matter of concern for researchers. Conventional simple random sampling (SRS) is the usual practice; however, ranked set sampling (RSS) schemes are very effective methods of choosing sample values. This study designs and investigates dispersion control charts under different RSS strategies for normal and non-normal processes. We have considered RSS, median ranked set sampling (MRSS), and extreme ranked set sampling (ERSS) schemes to design dispersion control charts. The performance of the existing and the proposed control charts is evaluated in terms of relative efficiency and power for normal and a variety of non-normal distributions. The comparative analysis reveals that the proposed structures outperform the existing charts. The application of the proposed procedures is also shown for a bottle-filling process to provide efficient and timely signaling of any special causes in the process.
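To make the sampling schemes concrete, the sketch below draws ranked set samples and one simplified variant of extreme ranked set samples from a simulated process; the set size, the particular ERSS variant used and the data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

def ranked_set_sample(population, m=5):
    """Draw one ranked set sample of size m.

    m random sets of m units are drawn; the i-th smallest unit of the i-th set
    is measured, so only m of the m*m drawn units are actually quantified.
    """
    sets = rng.choice(population, size=(m, m), replace=False)
    return np.array([np.sort(sets[i])[i] for i in range(m)])

def extreme_ranked_set_sample(population, m=5):
    """Simplified ERSS variant: alternate between the minimum and maximum of each ranked set."""
    sets = rng.choice(population, size=(m, m), replace=False)
    return np.array([np.sort(sets[i])[0 if i % 2 == 0 else -1] for i in range(m)])

process = rng.normal(100, 5, size=10_000)
rss = ranked_set_sample(process)
erss = extreme_ranked_set_sample(process)
print("RSS sample std:", round(rss.std(ddof=1), 2),
      "| ERSS sample std:", round(erss.std(ddof=1), 2))
```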

16.
As competition for world markets becomes more intense and as greater demands are placed on human and natural resources, manufacturers face a formidable challenge: to produce cost-competitive products of the highest possible quality. Nowadays the quality control (quality assurance) department can spend incredibly large amounts of time just collecting the test data they need in order to do their job. Collating and analysing the data is similarly very demanding in terms of how much human time can be expended. Simplification and automation of the tasks of data collection, collation and analysis is possible with a quality control information system. A microprocessor can form the basis of a quality control information system if application to only a very small area of a manufacturing operation is desired. However, far greater benefits can be derived if a real-time minicomputer is used instead of a microprocessor. This is because quality control data can then be collected across the entire factory floor. Statistical quality control techniques can subsequently be applied to examine the effect of process conditions in one part of the factory on product malquality exhibited in any other part of the factory. 'What-if' type analyses can also be conducted factory-wide. A quality control information system should be capable of (a) identifying product quality problems, (b) determining the causes of these problems, (c) helping to eliminate the causes, and (d) monitoring the altered process. These system functions enable the quality control department to progress from a 'fix-the-product' mode to a 'fix-the-process' mode and thereby concentrate on making the product right in the first place, by achieving better control of the manufacturing process. Application software packages are now available which have been purpose-designed for quality control information systems. One such example is the Quality Decision Management (QDM) software package from Hewlett-Packard, which runs on an industrial HP 1000 real-time minicomputer. On the data entry side a quality control information system should be capable of manual input from menu-driven terminals and bar-code wands. Automatic input of test data is also vitally important. This enables devices such as analysers, test instruments, data acquisition units, ATE and other computers to be interfaced to gather test data with no manual intervention. All collected data should be held in a database. The QDM database can be searched to generate a wide range of reports in tabular or colour graphic form; the latter include histograms, scattergrams, control charts (X-bar, sigma), P-charts and Pareto diagrams. The reports enable crucial quality control decisions to be taken. Quality control information systems can be linked to higher-level production control computers to form a computer integrated manufacturing (CIM) network. Free movement of data around a CIM network offers enormous flexibility and efficiency in the overall manufacturing process, together with considerable savings in manpower. Costs incurred in the creation of a quality control information system can easily be won back by virtue of the significant savings that such a system brings about.

17.
The concurrent use of statistical process control and engineering process control involves monitoring manipulated and controlled variables. One multivariate control chart may handle the statistical monitoring of all variables, but observing the manipulated and controlled variables in separate control charts may improve understanding of how disturbances and the controller performance affect the process. In this article, we illustrate how step and ramp disturbances manifest themselves in a single-input–single-output system by studying their resulting signatures in the controlled and manipulated variables. The system is controlled by variations of the widely used proportional-integral-derivative (PID) control scheme. Implications for applying control charts in these scenarios are discussed.
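A small simulation in the same spirit, assuming a discrete first-order process under PID control with a step disturbance, which records the controlled and manipulated variables separately so their different signatures can be charted; the process model, controller gains and disturbance size are illustrative assumptions.

```python
import numpy as np

# discrete PID control of a first-order process hit by a step disturbance at t = 100;
# both the controlled variable (y) and the manipulated variable (u) are logged so that
# their distinct signatures can be charted separately
kp, ki, kd = 0.8, 0.2, 0.05          # illustrative controller gains
setpoint, n = 0.0, 200
y, u, integral, prev_err = 0.0, 0.0, 0.0, 0.0
y_log, u_log = [], []

rng = np.random.default_rng(9)
for t in range(n):
    disturbance = 2.0 if t >= 100 else 0.0             # step disturbance entering the process
    # simple first-order process: output decays toward the control action plus disturbance
    y = 0.7 * y + 0.3 * (u + disturbance) + rng.normal(0, 0.05)
    err = setpoint - y
    integral += err
    u = kp * err + ki * integral + kd * (err - prev_err)   # PID control law
    prev_err = err
    y_log.append(y)
    u_log.append(u)

# after the controller compensates, y returns near target while u settles at a new level
print("mean y before/after:", round(np.mean(y_log[:100]), 2), round(np.mean(y_log[150:]), 2))
print("mean u before/after:", round(np.mean(u_log[:100]), 2), round(np.mean(u_log[150:]), 2))
```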

18.
In multivariate statistical process control (MSPC), most multivariate control charts can effectively detect anomalies based on an overall statistic; however, they cannot provide guidelines for classifying the source(s) of out-of-control signals. Classifying the source(s) of process mean shifts is critical for quality control in multivariate manufacturing processes, since their immediate identification can greatly help quality engineers narrow down the set of possible root causes and take corrective actions. This study presents an improved particle swarm optimisation with simulated annealing-based selective multiclass support vector machine ensemble (PS-SVME) approach, in which selected multiclass SVMs are jointly used for classifying the source(s) of process mean shifts in multivariate control charts. The performance of the proposed PS-SVME approach is evaluated by computing its classification accuracy. Simulation experiments are conducted and a real application is illustrated to validate the effectiveness of the developed approach. The analysis results indicate that the developed PS-SVME approach performs effectively in classifying the source(s) of process mean shifts.

19.
The Exponentially Weighted Moving Average (EWMA) control chart has mainly been used to monitor continuous data, usually under the normality assumption. In addition, a number of EWMA control charts have been proposed for Poisson data. Here, however, we suggest applying the EWMA to hypergeometric data originating from a multivariate Bernoulli process. The problem studied in this paper concerns the wear-out of electronics testers, which results in unnecessary and costly repairs of electronic units. Assuming that the testing process is in statistical control, although the quality of the tested units is not, we can detect the wear-out of a tester by finding assignable causes of variation in that tester. This reasoning forms the basis of a new EWMA procedure designed to detect shifts in a Bernoulli process in an out-of-control environment. Copyright © 2005 John Wiley & Sons, Ltd.
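As a rough illustration of the EWMA monitoring idea, the sketch applies the standard EWMA recursion with time-varying limits to per-lot fail counts from a tester whose fail rate drifts upward; the binomial approximation to the hypergeometric setting, the smoothing constant and the data are assumptions, not the paper's procedure.

```python
import numpy as np

def ewma_chart(x, p0, n, lam=0.1, L=3):
    """EWMA chart for per-lot defect counts under an approximate binomial in-control model.

    x: observed number of failing units per tested lot, p0: in-control fail fraction,
    n: units per lot. The hypergeometric setting of the paper is approximated here by
    a binomial variance, which is a simplification.
    """
    mu0, var0 = n * p0, n * p0 * (1 - p0)
    z, alarms = mu0, []
    for t, xt in enumerate(x):
        z = lam * xt + (1 - lam) * z                   # EWMA recursion
        width = L * np.sqrt(var0 * lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
        if abs(z - mu0) > width:
            alarms.append(t)
    return alarms

rng = np.random.default_rng(10)
# a tester whose effective fail rate drifts upward after lot 60, mimicking wear-out
counts = np.concatenate([rng.binomial(50, 0.05, 60), rng.binomial(50, 0.12, 40)])
print("EWMA alarms at lots:", ewma_chart(counts, p0=0.05, n=50))
```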

20.
Knowing when a process has changed would simplify the search for and identification of the special cause. Consequently, having an estimate of the process change point following a control chart signal would be useful to process engineers. Much of the literature on change point models and techniques for statistical process control applications considers processes well modelled by the normal distribution. However, the Poisson distribution is commonly used in industrial quality control applications for modelling attribute-based process quality characteristics (e.g., counts of non-conformities). Commonly used control charts for monitoring Poisson distributed data include the Poisson cumulative sum (CUSUM) and exponentially weighted moving average (EWMA) control charts. In this paper, we study the effect of changes in the design of the control chart on the performance of the change point estimators offered by these procedures. In particular, we compare the root mean square error performance of the change point estimators offered by the Poisson CUSUM and EWMA control charts relative to that achieved by a maximum likelihood estimator of the process change point. Results indicate that the relative performance achieved by each change point estimator is a function of the corresponding control chart design. Relative mean index plots are provided to enable users of these control charts to choose a control chart design and change point estimator combination that will yield robust change point estimation performance across a range of potential change magnitudes.
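A minimal sketch of the two ingredients compared in the paper, assuming Poisson counts: a one-sided Poisson CUSUM with the usual reference value k = (lambda1 - lambda0) / ln(lambda1 / lambda0), and a brute-force maximum likelihood change-point estimate; the decision interval, shift size and data are illustrative choices.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(11)

# counts of non-conformities: in-control mean 4, shifted to 6 after observation 80
x = np.concatenate([rng.poisson(4, 80), rng.poisson(6, 40)])
lam0, lam1 = 4.0, 6.0

# one-sided Poisson CUSUM for an upward shift; k is the usual reference value
k = (lam1 - lam0) / np.log(lam1 / lam0)
c, h, signal = 0.0, 8.0, None            # h is an illustrative decision interval
for t, xt in enumerate(x):
    c = max(0.0, c + xt - k)
    if c > h:
        signal = t
        break

def change_point_mle(data):
    """Split the series at every candidate point tau and keep the split with the
    highest Poisson log-likelihood (segment means as the rate estimates)."""
    best_tau, best_ll = None, -np.inf
    for tau in range(1, len(data) - 1):
        ll = (poisson.logpmf(data[:tau], data[:tau].mean()).sum()
              + poisson.logpmf(data[tau:], data[tau:].mean()).sum())
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau

print("CUSUM signal at t =", signal)
print("maximum likelihood change-point estimate tau =", change_point_mle(x))
```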
