Similar Documents
A total of 20 similar documents were found (search time: 15 ms).
1.
During a long industrial career the writer observed many occasions on which the neglect of statistical analysis of process capability by design engineers resulted in problems during production. In this article some basic statistical methods are demonstrated with elementary data in the hope of convincing doubting engineers that they are easy to use, simple to teach, and full of pragmatic common sense. This is followed by an account of various published sources that demonstrate the importance of statistical methods in engineering.
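
As a concrete illustration of the kind of process-capability check advocated above, the sketch below computes the standard Cp and Cpk indices from a sample of measurements. The specification limits and data are hypothetical and serve only to show the calculation.

```python
import numpy as np

def capability_indices(x, lsl, usl):
    """Estimate the process-capability indices Cp and Cpk from a sample
    of measurements x and the specification limits [lsl, usl]."""
    mu = np.mean(x)
    sigma = np.std(x, ddof=1)                        # sample standard deviation
    cp = (usl - lsl) / (6.0 * sigma)                 # potential capability (spread only)
    cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)    # penalizes off-centre processes
    return cp, cpk

# Hypothetical shaft-diameter data with specification 10.00 +/- 0.05 mm.
rng = np.random.default_rng(1)
sample = rng.normal(loc=10.01, scale=0.012, size=50)
print(capability_indices(sample, lsl=9.95, usl=10.05))
```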

2.
ABSTRACT

Images have been widely used in manufacturing applications for monitoring production processes, partly because they are often convenient and economic to acquire by different types of imaging devices. Medical imaging techniques, such as CT, PET, X-ray, ultrasound, magnetic resonance imaging (MRI), and functional MRI, have become a basic medical diagnosis tool nowadays. Satellite images are also commonly used for monitoring the changes of the earth’s surface. In all these applications, image comparison and monitoring are the common and fundamentally important statistical problems that should be addressed properly. In computer science, applied mathematics, statistics and some other disciplines, many image processing methods have been proposed. In this article, I will discuss (i) a powerful statistical tool, called jump regression analysis (JRA), for modeling and analyzing images and other types of data with jumps and other singularities involved, (ii) some image processing problems and methods that are potentially useful for image comparison and monitoring, and (iii) some of my personal perspectives about image comparison and monitoring.
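
The JRA machinery itself goes well beyond what an abstract can convey, but the underlying idea of locating jumps by comparing one-sided local averages can be sketched in a few lines. The window size, threshold and test signal below are arbitrary illustrative choices and are not taken from the article.

```python
import numpy as np

def jump_candidates(y, window=5, k=4.0):
    """Flag candidate jump locations in a 1-D signal by comparing
    one-sided local means on either side of each point (a much
    simplified cousin of jump regression analysis)."""
    n = len(y)
    flags = []
    noise = np.std(np.diff(y)) / np.sqrt(2)   # crude noise-level estimate
    for i in range(window, n - window):
        left = y[i - window:i].mean()
        right = y[i:i + window].mean()
        if abs(right - left) > k * noise / np.sqrt(window):
            flags.append(i)
    return flags

# Piecewise-constant signal with one jump plus Gaussian noise.
rng = np.random.default_rng(0)
signal = np.r_[np.zeros(100), np.ones(100) * 2.0] + rng.normal(0, 0.3, 200)
print(jump_candidates(signal)[:5])
```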

3.
Data collected during outbreaks are essential to better understand infectious disease transmission and design effective control strategies. But analysis of such data is challenging owing to the dependency between observations that is typically observed in an outbreak and to missing data. In this paper, we discuss strategies to tackle some of the ongoing challenges in the analysis of outbreak data. We present a relatively generic statistical model for the estimation of transmission risk factors, and discuss algorithms to estimate its parameters for different levels of missing data. We look at the problem of computational times for relatively large datasets and show how they can be reduced by appropriate use of discretization, sufficient statistics and some simple assumptions on the natural history of the disease. We also discuss approaches to integrate parametric model fitting and tree reconstruction methods in coherent statistical analyses. The methods are tested on both real and simulated datasets of large outbreaks in structured populations.

4.
The examination of product characteristics using a statistical tool is an important step in a manufacturing environment to ensure product quality. Several methods are employed for maintaining product quality assurance. Quality control charts, which utilize statistical methods, are normally used to detect special causes. Shewhart control charts are popular; their only limitation is that they are effective in handling only large shifts. For handling small shifts, the cumulative sum (CUSUM) and the exponentially weighted moving average (EWMA) charts are more practical. For handling both small and large shifts, adaptive control charts are used. In this study, we propose a new adaptive EWMA scheme. The scheme is based on the CUSUM accumulation error and detects a wide range of shifts in the process location. The CUSUM features of the proposed scheme help with the identification of prior shifts. The proposed scheme uses the Huber and Tukey bisquare functions for efficient shift detection. We use the average run length (ARL) as the performance indicator for comparison, and the proposed scheme outperforms some of the existing schemes. An example that uses real-life data is also provided to demonstrate the implementation of the proposed scheme.
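
As background for readers unfamiliar with robust EWMA-type charts, the sketch below applies a plain EWMA statistic to Huber-scored observations. It is a generic illustration only, not the adaptive CUSUM/EWMA scheme proposed in the paper, and the smoothing constant, control-limit multiplier and simulated shift are arbitrary.

```python
import numpy as np

def huber_psi(e, k=1.5):
    """Huber score function: identity near zero, clipped at +/- k so that
    single large deviations have bounded influence."""
    return np.clip(e, -k, k)

def robust_ewma_signal(x, mu0=0.0, sigma=1.0, lam=0.2, L=2.86):
    """Plain EWMA statistic applied to Huber-scored standardized
    observations; returns the index of the first point beyond the
    asymptotic control limits, or None.  This is a generic robust-EWMA
    sketch; lam and L are illustrative design constants."""
    z = mu0
    limit = L * sigma * np.sqrt(lam / (2.0 - lam))
    for t, xt in enumerate(x):
        z = lam * huber_psi((xt - mu0) / sigma) + (1.0 - lam) * z
        if abs(z - mu0) > limit:
            return t
    return None

rng = np.random.default_rng(7)
data = np.r_[rng.normal(0, 1, 50), rng.normal(1.5, 1, 50)]  # mean shift at t = 50
print(robust_ewma_signal(data))
```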

5.
Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from the original time series. When creating synthetic datasets, different resampling techniques result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied to real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques can lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise, which are nevertheless the methods currently used in the wide majority of wavelet applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. The results suggest that data-driven resampling methods, such as the hidden Markov model algorithm and the ‘beta-surrogate’ method, should be used.
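
One classical way to build a spectrum-preserving null model is phase randomization of the Fourier transform. The sketch below generates surrogate series this way; it is meant only to illustrate the idea of a data-driven resampling null model and is not necessarily one of the seven testing methods compared in the paper.

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Generate one surrogate series that approximately preserves the
    Fourier amplitude spectrum (hence the autocorrelation) of x while
    randomizing the phases.  A classical spectrum-preserving resampling
    scheme, shown only to illustrate the idea of a data-driven null model."""
    n = len(x)
    spec = np.fft.rfft(x - np.mean(x))
    phases = rng.uniform(0, 2 * np.pi, size=spec.size)
    phases[0] = 0.0                       # keep the DC component real
    surrogate = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=n)
    return surrogate + np.mean(x)

rng = np.random.default_rng(3)
t = np.arange(512)
series = np.sin(2 * np.pi * t / 64) + rng.normal(0, 0.5, t.size)
null_pool = np.array([phase_randomized_surrogate(series, rng) for _ in range(200)])
print(null_pool.shape)   # 200 surrogate series for building null statistics
```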

6.
This paper summarizes and discusses the main methods used by the authors to control electromagnetic noise during the development of a low-noise squirrel-cage induction motor, a three-phase squirrel-cage induction motor designed specifically for plastic injection moulding machines, together with the corresponding test results. The statistical analysis methods applied to the relevant test data and the results obtained are also presented, and some useful conclusions are drawn.

7.
In a research program aimed at the assessment of more comprehensive accident analysis methods, new applications of statistical analysis procedures to commercial vehicle accidents have been investigated and exemplary results obtained [Philipson et al., 1978]. A file of some 3000 specially-detailed California Highway Patrol accident reports from two areas of California during a period of about one year in 1975–1976 provided the unique data base for the application. Computer implementation and evaluation through statistical testing of the quality of the data file were first accomplished. Then an exhaustive univariate analysis of the data was conducted to describe the file in detail. Selected sets of dependent and independent variables were then subjected to analyses of association employing contingency table analysis methods. In several cases, acceptable log-linear models to explain the variables' association were thereby established. Vehicle exposure, measured in vehicle miles traveled for each vehicle category, was introduced into one of the analyses to assess its impact on the set of significant interactions; it was indeed found to be important, although the accuracy of its estimation was problematic. This estimation was carried out by two independent methods: a “direct” procedure based on a series of linear extrapolations of basic State of California commercial vehicle traffic data, and an “induced” estimation procedure essentially employing only data in the accident reports. The results of the two methods exhibited some common trends, but otherwise differed considerably. The results of the research effort, highlighted in this article, indicated the value of the methods investigated, and hence of the detailed accident report files necessary for their use. They also strongly illuminated the areas of greatest difficulty in the application of these methods, basically associated with accident data quality and exposure estimation accuracy, and general directions for their improvement.
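
As background on the contingency-table analysis of association mentioned above, the sketch below runs a chi-square test of independence with scipy on an invented cross-tabulation; the vehicle categories and counts are hypothetical and do not come from the study.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical cross-tabulation of vehicle category against accident
# severity, purely to illustrate contingency-table analysis of
# association (all counts are invented).
table = np.array([[120,  45, 10],    # straight trucks
                  [200,  80, 25],    # tractor-trailers
                  [ 60,  15,  5]])   # buses

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```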

8.
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions associated with each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the "excess" zeros frequently observed in practice. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
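
In the same spirit as the simulation experiment mentioned above, but with entirely invented parameters, the sketch below shows how Poisson trials with unequal, low crash probabilities across segments produce more zeros than a single Poisson fit predicts, without any perfectly safe state.

```python
import numpy as np

# Counts generated by Bernoulli (Poisson) trials with unequal, low
# probabilities across road segments show an apparent "excess" of zeros
# relative to a single Poisson fit.  All parameters below are invented.
rng = np.random.default_rng(42)
n_segments = 5000
exposure = rng.gamma(shape=0.5, scale=2e4, size=n_segments)  # vehicle-miles
p_per_mile = 5e-5                                            # crash risk per mile
counts = rng.binomial(np.round(exposure).astype(int), p_per_mile)

lam_hat = counts.mean()                       # naive single-Poisson fit
print(f"observed zero share : {np.mean(counts == 0):.2%}")
print(f"Poisson prediction  : {np.exp(-lam_hat):.2%}")
```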

9.
In this study, a two-state Markov switching count-data model is proposed as an alternative to zero-inflated models to account for the preponderance of zeros sometimes observed in transportation count data, such as the number of accidents occurring on a roadway segment over some period of time. For this accident-frequency case, zero-inflated models assume the existence of two states: one is a zero-accident count state, in which accident probabilities are so low that they cannot be statistically distinguished from zero, and the other is a normal-count state, in which counts can be non-negative integers generated by some counting process, for example a Poisson or negative binomial. While zero-inflated models have come under some criticism with regard to accident-frequency applications, one fact is undeniable: in many applications they provide a statistically superior fit to the data. The Markov switching approach we propose seeks to overcome some of the criticism associated with the zero-accident state of the zero-inflated model by allowing individual roadway segments to switch between zero and normal-count states over time. An important advantage of this Markov switching approach is that it allows for the direct statistical estimation of the specific roadway-segment state (i.e., zero-accident or normal-count state), whereas traditional zero-inflated models do not. To demonstrate the applicability of this approach, a two-state Markov switching negative binomial model (estimated with Bayesian inference) and standard zero-inflated negative binomial models are estimated using five-year accident frequencies on Indiana interstate highway segments. It is shown that the Markov switching model is a viable alternative and results in a superior statistical fit relative to the zero-inflated models.
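
To make the data-generating idea concrete, here is a toy simulation of a two-state Markov switching count process for a single roadway segment; the transition matrix and Poisson mean are invented, and the estimation step (Bayesian inference in the paper) is not shown.

```python
import numpy as np

# Toy two-state Markov switching count process for one road segment:
# the segment moves between a zero-count state and a normal-count
# (Poisson) state over time.  Transition probabilities and the Poisson
# mean are invented for illustration only.
rng = np.random.default_rng(5)
T = 20                                  # observation periods (e.g., years)
P = np.array([[0.90, 0.10],             # P[i, j] = Pr(next state j | state i)
              [0.30, 0.70]])            # state 0 = zero state, 1 = normal state
lam = 3.0                               # Poisson mean in the normal state

state = 1
states, counts = [], []
for _ in range(T):
    state = rng.choice(2, p=P[state])
    states.append(int(state))
    counts.append(int(rng.poisson(lam)) if state == 1 else 0)

print("states:", states)
print("counts:", counts)
```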

10.
Maier D, Marth M, Honerkamp J, Weese J 《Applied optics》1999,38(21):4671-4680
An important step in analyzing data from dynamic light scattering is estimating the relaxation time spectrum from the correlation time function. This estimation is frequently done by regularization methods. To obtain good results in this step, the statistical errors of the correlation time function must be taken into account [J. Phys. A 6, 1897 (1973)]. So far, error models assuming independent statistical errors have been used in the estimation. We show that results for the relaxation time spectrum are better if correlation between statistical errors is taken into account. There are two possible ways to obtain the error sizes and their correlations. On the one hand, they can be calculated from the correlation time function by use of a model derived by Schätzel. On the other hand, they can be computed directly from the time series of the scattered light. Simulations demonstrate that the best results are obtained with the latter method. This method requires, however, storing the time series of the scattered light during the experiment. Therefore a modified experimental setup is needed. Nevertheless, the simulations also show improvement in the resulting relaxation time spectra if the error model of Schätzel is used. This improvement is confirmed when the approach is applied to experimental data from a latex with a bimodal sphere size distribution.
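
To illustrate the estimation step in generic form, the sketch below solves a Tikhonov-regularized, covariance-weighted least-squares problem for a relaxation-time spectrum on a synthetic example. The kernel, grids, noise level and regularization parameter are invented, and this is a generic regularization sketch rather than the authors' exact algorithm.

```python
import numpy as np

def regularized_spectrum(K, g, cov, alpha):
    """Tikhonov-regularized estimate of a relaxation-time spectrum s from
    correlation data g = K s + noise, taking the full error covariance
    'cov' into account (generalized least squares plus a smoothness
    penalty).  alpha is the regularization parameter."""
    W = np.linalg.inv(cov)                       # weight by inverse covariance
    n = K.shape[1]
    D = np.diff(np.eye(n), 2, axis=0)            # second-difference operator
    A = K.T @ W @ K + alpha * D.T @ D
    b = K.T @ W @ g
    return np.linalg.solve(A, b)

# Tiny synthetic example: a narrow spectral peak near tau = 1, i.e.
# roughly a single-exponential decay, plus Gaussian noise.
tau = np.logspace(-1, 1, 30)                     # candidate relaxation times
t = np.linspace(0.01, 5, 60)                     # correlation lag times
K = np.exp(-np.outer(t, 1.0 / tau))              # kernel matrix
true_s = np.exp(-np.log(tau) ** 2 / 0.1)
rng = np.random.default_rng(2)
g = K @ true_s + rng.normal(0, 0.01, t.size)
s_hat = regularized_spectrum(K, g, cov=0.01 ** 2 * np.eye(t.size), alpha=1e-3)
print(s_hat.round(2))
```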

11.
Many multi-axial fatigue limit criteria are formalized as a linear combination of a shear stress amplitude and a normal stress. To identify the shear stress amplitude, appropriate conventional definitions, such as the minimum circumscribed circle (MCC) or minimum circumscribed ellipse (MCE) proposals, are in use. Despite computational improvements, deterministic algorithms implementing the MCC/MCE methods are exceptionally time-demanding when applied to “coiled” random loading paths resulting from in-service multi-axial loadings, and they may also provide insufficiently robust and reliable results. It would then be preferable to characterize multi-axial random loadings by statistical re-formulations of the deterministic MCC/MCE methods. Following an early work of Pitoiset et al., this paper presents a statistical re-formulation of the MCE method. Numerical simulations are used to compare both statistical re-formulations with their deterministic counterparts. The generally good agreement observed, with somewhat better performance of the statistical approach, confirms the validity, reliability and robustness of the proposed formulation.
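
For readers unfamiliar with the MCC convention, the sketch below estimates the shear stress amplitude as the radius of the minimum circumscribed circle of a 2-D stress path, found by a direct derivative-free minimax search; the loading path is hypothetical, and long “coiled” in-service paths would call for a dedicated geometric algorithm or the statistical re-formulation discussed above.

```python
import numpy as np
from scipy.optimize import minimize

def mcc_shear_amplitude(path):
    """Shear stress amplitude taken as the radius of the minimum
    circumscribed circle (MCC) of a 2-D shear stress path, found here
    by directly minimizing the maximum distance to a candidate centre
    with a derivative-free search."""
    objective = lambda c: np.max(np.hypot(path[:, 0] - c[0], path[:, 1] - c[1]))
    res = minimize(objective, x0=path.mean(axis=0), method="Nelder-Mead")
    return res.fun, res.x          # (radius = amplitude, centre coordinates)

# Hypothetical non-proportional loading path (an ellipse-like cycle).
t = np.linspace(0, 2 * np.pi, 400)
path = np.column_stack([100 * np.cos(t), 60 * np.sin(t + 0.5)])
amplitude, centre = mcc_shear_amplitude(path)
print(round(amplitude, 1), centre.round(1))
```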

12.
This paper describes results of computer evaluation of concrete strength test data obtained during construction of the Khairabad bridge in Pakistan. The data were processed using a spreadsheet model, the Complete Quality Control Report (CQC Report), together with a set of statistical tools and the industry-developed Quality Control criteria. The Complete Quality Control Report was set up to check compliance of the concrete with the specified strength requirements, to appraise control over batch-to-batch variations, and to determine the adequacy of testing procedures. Prior to evaluation, the data were analyzed for the presence of doubtful readings, for normality, and for belonging to the same population. The paper also gives some information regarding the structure of the Complete Quality Control Report and the statistical methods used in the evaluation, with particular emphasis on the usefulness of these methods. The work described in this paper is a continual upgrading of the Complete Quality Control Report through the use of a more advanced electronic spreadsheet program, selection of the most informative statistical tools, and adjustments to the industry-used Quality Control criteria.
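
To give a flavour of the first-pass screening described above, the sketch below computes summary statistics for a set of cylinder strength results and flags doubtful readings with a simple dispersion-based rule; the data and the 2.5-sigma screen are invented for illustration and are not the CQC Report's actual criteria.

```python
import numpy as np

# Generic first-pass screening of concrete strength results: summary
# statistics plus a simple flag for doubtful readings (values more than
# 2.5 sample standard deviations from the mean).  The strengths and the
# screening rule are illustrative only.
strengths = np.array([31.2, 29.8, 33.5, 30.1, 28.9, 32.4, 30.7, 44.0,
                      29.5, 31.9, 30.3, 32.8])           # MPa, invented
mean, sd = strengths.mean(), strengths.std(ddof=1)
cov = sd / mean                                           # coefficient of variation
doubtful = strengths[np.abs(strengths - mean) > 2.5 * sd]
print(f"mean = {mean:.1f} MPa, s = {sd:.2f} MPa, CoV = {cov:.1%}")
print("doubtful readings:", doubtful)
```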

13.
Owing to the scatter inherent in civil engineering materials, the complexity of structural systems, and the randomness of external environmental factors, it is necessary to introduce statistical methods for damage identification oriented toward structural health monitoring. This paper discusses the definition of system identification in civil engineering and reviews five categories of existing research related to statistical methods: Bayesian statistics, stochastic finite elements, statistical pattern recognition, statistical description of damage features, and other approaches. Based on the characteristics of civil engineering system identification and the existing research, further research prospects and directions for statistical methods are outlined.

14.
An outlier in an unreplicated factorial experiment is difficult to detect, and its presence reduces the power for detecting significant effects. This poses a problem for data analysts since methods for detecting outliers and testing effects in the presence of outliers are not available in popular statistical software. In this article we compare three methods that have been proposed in the literature for detecting outliers and testing effects for significance in the presence of an outlier. We illustrate the methods with data from a real experiment, comment on the ease of implementing them in standard statistical packages, and use a simulation study to compare their performance over a wider range of circumstances. We make recommendations about when each method should be used in practice.
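
A standard building block for judging effect significance in an unreplicated factorial is Lenth's pseudo standard error (PSE); the sketch below implements it on invented effect estimates. It is shown only as background for the testing problem and is not necessarily one of the three outlier-aware methods compared in the article.

```python
import numpy as np
from scipy.stats import t

def lenth_test(effects, alpha=0.05):
    """Lenth's pseudo-standard-error test for effect significance in an
    unreplicated factorial: a robust scale estimate from the smaller
    contrasts is turned into a margin of error for all effects."""
    effects = np.asarray(effects, dtype=float)
    s0 = 1.5 * np.median(np.abs(effects))
    pse = 1.5 * np.median(np.abs(effects)[np.abs(effects) < 2.5 * s0])
    d = len(effects) / 3.0                      # approximate degrees of freedom
    margin = t.ppf(1 - alpha / 2, d) * pse
    return {"PSE": pse, "margin": margin,
            "significant": np.abs(effects) > margin}

# Hypothetical effect estimates from a 2^4 unreplicated design (15 effects).
effects = [21.6, 3.1, 9.8, -0.6, 1.2, -0.9, 0.4, 2.2,
           -1.4, 0.8, 1.9, -0.3, 0.5, 1.1, -0.7]
print(lenth_test(effects))
```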

15.
Image data plays an important role in manufacturing and service industries because each image can provide a huge set of data points in just a few seconds at relatively low cost. Improvements in machine vision systems over time have led to higher-quality images, and the use of statistical methods can help to analyze the data extracted from such images efficiently. It is not efficient, in terms of time and cost, to use every single pixel in an image to monitor a process or product effectively. In recent years, some methods have been proposed to deal with image data. These methods are mainly applied to separate nonconforming items from conforming ones, and they are rarely applied to monitor process capability or performance. In this paper, a nonparametric regression method using wavelet basis functions is developed to extract features from grayscale image data. The extracted features are monitored over time to detect out-of-control conditions using a generalized likelihood ratio control chart. The proposed approach can also be applied to find the change point and the fault location simultaneously. Several numerical examples are used to evaluate the performance of the proposed method. Results indicate suitable performance of the proposed method in detecting out-of-control conditions and providing precise diagnostic information. Results also illustrate the suitable performance of our proposed method in comparison with a competing approach.
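
As generic background on wavelet-based feature extraction for image monitoring (not the paper's exact method), the sketch below decomposes a synthetic grayscale image with a 2-D wavelet transform and keeps only the largest-magnitude coefficients as features; a control chart such as a GLR chart would then be applied to these features. The wavelet, level and number of retained coefficients are arbitrary choices.

```python
import numpy as np
import pywt

def wavelet_features(image, wavelet="haar", level=2, keep=50):
    """Extract a compact feature vector from a grayscale image by a 2-D
    wavelet decomposition, keeping only the largest-magnitude
    coefficients (indices plus values)."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    flat, _ = pywt.coeffs_to_array(coeffs)          # all coefficients in one array
    flat = flat.ravel()
    idx = np.argsort(np.abs(flat))[-keep:]          # indices of dominant coefficients
    return idx, flat[idx]

# Synthetic 64x64 grayscale image: smooth background plus a bright patch.
rng = np.random.default_rng(0)
img = rng.normal(100, 2, (64, 64))
img[20:30, 40:50] += 40
indices, values = wavelet_features(img)
print(values.round(1))
```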

16.
The assessment of structural integrity data requires statistical analysis. However, most statistical analysis methods make some assumption regarding the underlying distribution. Here, a new distribution-free statistical assessment method based on a combination of Rank and Bimodal probability estimates is presented and shown to result in consistent estimates of different probability quantiles. The method is applicable to any data set expressed as a function of two parameters. Data for more than two parameters can always be expressed as different subsets varying only two parameters. In principle, this makes the method applicable to the analysis of more complex data sets. The strength of the statistical analysis method presented lies in the objectivity of the result. There is no need to make any subjective assumptions regarding the underlying distribution, or about the relationship between the parameters considered.
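
The rank ingredient can be illustrated with the classical median-rank (Bernard) approximation, a standard distribution-free way of attaching cumulative-probability estimates to ordered data. The sketch below is generic background, not the paper's exact Rank/Bimodal combination, and the toughness values are invented.

```python
import numpy as np

def median_rank_estimates(sample):
    """Distribution-free cumulative-probability estimates attached to
    the ordered sample values using Bernard's median-rank approximation
    F_i ~ (i - 0.3) / (n + 0.4)."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    probs = (ranks - 0.3) / (n + 0.4)
    return x, probs

# Hypothetical fracture-toughness results (MPa*sqrt(m)).
toughness = [62.0, 75.5, 58.3, 90.1, 81.7, 67.4, 71.2, 84.9]
values, probs = median_rank_estimates(toughness)
for v, p in zip(values, probs):
    print(f"{v:6.1f}  ->  F ~ {p:.3f}")
```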

17.
A. Schubert, A. Telcs 《Scientometrics》1986,9(5-6):231-238
A new indicator, called the publication potential, is proposed to measure the scientific strength of different countries. The indicator is based on SCI author counts and publication frequency distributions. Because it does not depend on national statistical reports, it avoids the ambiguities of statistical definitions and methods, thereby providing a solid ground for cross-national comparisons. Publication-based and statistical survey data for 34 countries are compared and some of the most conspicuous discrepancies are pinpointed.

18.
Kirsten Bjørkestøl, Tormod Næs 《Quality Engineering》2005,17(4):509-533
This article presents and compares different models for regression where some of the variables are mixture components. The interpretations of the parameters depend on the model used. It is shown that some of the models are particularly useful for providing information about possible effects. Recommendations and cautions regarding the use of some regression procedures in some standard statistical software packages are given. The methods can be used when the data do not come from an ideal statistical design. The article also discusses how it is possible to find combinations of the variables that will provide output with good predictive properties. An example from the carbon industry with several process variables and two different mixtures is used to illustrate some of the ideas.
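
One standard family of models for regression with mixture components is the Scheffé polynomial. The sketch below fits a quadratic Scheffé model to invented three-component mixture data by ordinary least squares; it does not reproduce the article's models, which additionally involve process variables.

```python
import numpy as np

# Minimal sketch of fitting a Scheffe-type quadratic mixture model
#   y = sum_i b_i x_i + sum_{i<j} b_ij x_i x_j
# by ordinary least squares on invented three-component mixture data.
rng = np.random.default_rng(4)
n = 30
x = rng.dirichlet(alpha=[2.0, 2.0, 2.0], size=n)         # proportions sum to 1
y = (5 * x[:, 0] + 3 * x[:, 1] + 8 * x[:, 2]
     + 12 * x[:, 0] * x[:, 1] + rng.normal(0, 0.2, n))    # synthetic response

# Design matrix: pure-component terms plus the three pairwise blending terms.
X = np.column_stack([x,
                     x[:, 0] * x[:, 1],
                     x[:, 0] * x[:, 2],
                     x[:, 1] * x[:, 2]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef.round(2))    # estimates of b1, b2, b3, b12, b13, b23
```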

19.
Functional data and profiles are characterized by complex relationships between a response and several predictor variables. Fortunately, statistical process control methods provide a solid ground for monitoring the stability of these relationships over time. This study focuses on the monitoring of 2-dimensional geometric specifications. Although the existing approaches deploy regression models with spatial autoregressive error terms combined with control charts to monitor the parameters, they are designed under idealistic assumptions that can easily be violated in practice. In this paper, independent component analysis (ICA) is used in combination with a statistical process control method as an alternative scheme for phase II monitoring of geometric profiles when non-normality of the error term is present. The performance of this method is evaluated and compared with regression-based and PCA-based approaches through simulation using the average run length criterion. The results reveal that the proposed ICA-based approach is robust against non-normality in the in-control analysis, and its out-of-control performance is on par with that of the PCA-based method in the case of normal and near-normal error terms.
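
As a generic sketch of the ICA idea (not the paper's exact phase II scheme), the code below learns independent components from simulated in-control profiles with non-normal noise and monitors new profiles with a simple sum-of-squares statistic on the component scores; all simulation settings are invented.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Learn independent components from in-control profiles, then score new
# profiles with a simple standardized sum-of-squares statistic.
rng = np.random.default_rng(0)
n_profiles, n_points = 200, 60
baseline = np.sin(np.linspace(0, 2 * np.pi, n_points))
in_control = baseline + rng.laplace(0, 0.1, (n_profiles, n_points))  # non-normal noise

ica = FastICA(n_components=5, random_state=0)
scores = ica.fit_transform(in_control)                 # phase I component scores
mu, sd = scores.mean(axis=0), scores.std(axis=0)

def monitoring_stat(profile):
    """Sum of squared standardized ICA scores for one profile."""
    s = ica.transform(profile.reshape(1, -1))[0]
    return np.sum(((s - mu) / sd) ** 2)

shifted = baseline + 0.3 + rng.laplace(0, 0.1, n_points)   # shifted profile
print(monitoring_stat(in_control[0]), monitoring_stat(shifted))
```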

20.
Abstract: During usual data gathering, the efficiency of the statistical analysis strongly depends on the noise level superimposed on the signal. It has been found that some well-known statistical tests, commonly utilised in data acquisition to detect the presence of drift, can fail under some conditions. Thus, a statistical procedure for estimating in advance the reliability of the statistical method to be utilised could be useful in the design of experimental analyses. This paper reports the results of a simulation study carried out to evaluate the performance in drift detection of non-parametric tests such as the Wald-Wolfowitz run test, in comparison with the Mann-Whitney and reverse arrangement tests. In order to assess the sensitivity of the tests in detecting a monotonic drift, a simulation program was developed. In the program, a Gaussian raw data sequence with a linear pattern of variable slope and variable variance was simulated and given as the input to the tests. The capability to detect the presence of drift as a function of the angular coefficient and of the variance of the noise superimposed on the signal was verified. The obtained data were synthesised in graphs so that the experimentalist can determine in advance the effectiveness of each of the considered statistical methods, in terms of the percentage of success in detecting drift as a function of the drift relevance and the noise amplitude. Finally, the graphs permitted the elucidation of the causes of the contradictory failing results observed in long-term experimental analyses.
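
A minimal version of such a simulation can be written directly with scipy: the sketch below implements the Wald-Wolfowitz runs test (normal approximation) and applies it, together with a Mann-Whitney comparison of the two halves of the record, to a Gaussian sequence with a weak linear drift. The slope, noise level and half-split are invented illustrative choices.

```python
import numpy as np
from scipy.stats import mannwhitneyu, norm

def runs_test(x):
    """Wald-Wolfowitz runs test for randomness about the median:
    approximate two-sided p-value from the normal approximation to the
    number of runs."""
    signs = x > np.median(x)
    n1, n2 = signs.sum(), (~signs).sum()
    runs = 1 + np.sum(signs[1:] != signs[:-1])
    mean = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    z = (runs - mean) / np.sqrt(var)
    return 2 * norm.sf(abs(z))

# Simulated sequence with a weak linear drift buried in Gaussian noise,
# in the spirit of the simulation study described above (values invented).
rng = np.random.default_rng(1)
n = 200
drift = 0.01 * np.arange(n)                  # slope of the monotonic drift
x = drift + rng.normal(0, 1.0, n)

# Mann-Whitney comparison of the first and second halves of the record.
u_p = mannwhitneyu(x[: n // 2], x[n // 2:], alternative="two-sided").pvalue
print(f"runs test p = {runs_test(x):.3f}, Mann-Whitney p = {u_p:.3f}")
```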

