Similar Articles
19 similar articles found.
1.
The statistical probability distribution of data should be known in advance, so that statistical inference can be drawn from the data and the information they carry can be understood. Nonparametric goodness-of-fit tests have long been the standard tool for probability distribution recognition. However, such procedures cannot guarantee precise recognition when only small samples are available, and the results depend on the number of groups into which the data are divided. This study proposes a neural network-based approach to probability distribution recognition. Two types of neural networks, backpropagation and learning vector quantization, are used to classify normal, exponential, Weibull, uniform, chi-square, t, F, and lognormal distributions. Implementation results demonstrate that the proposed approach outperforms the traditional statistical approach.
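The small-sample weakness noted above can be seen with an off-the-shelf test. The following sketch (assuming NumPy and SciPy; it is not the paper's neural-network classifier) fits a normal model to 20 Weibull-distributed observations and applies a Kolmogorov-Smirnov test, which frequently fails to reject the wrong model at this sample size:

```python
import numpy as np
from scipy import stats

# Small sample whose true model is Weibull, not normal.
rng = np.random.default_rng(0)
data = rng.weibull(1.5, size=20)

# Fit a (wrong) normal model and test it with Kolmogorov-Smirnov.
# Because the parameters are estimated from the same data, the
# p-value is only approximate (a Lilliefors-style correction would
# be needed for an exact test).
mu, sigma = data.mean(), data.std(ddof=1)
stat, p = stats.kstest(data, "norm", args=(mu, sigma))
print(f"KS statistic = {stat:.3f}, p-value = {p:.3f}")
```

With so few observations the test statistic is small relative to its critical value, illustrating why distribution recognition from small samples is unreliable.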

2.
Abstract

This study considers the complete chi-square goodness-of-fit test procedure and numerical analysis to improve the intermediate value method for approximating the cumulative distribution function. The goodness-of-fit test is applied to select the appropriate distribution in frequency analysis for estimating annual maximum flows of hydrologic data from four measurement stations (Lounung, Yuemei, Santimem, and Kaoping) located in the Kaoping River Basin. Analytical results indicate that the error percentage of the chi-square statistical values is improved by 0.3% to 38.3%. Thus, the improved chi-square goodness-of-fit test procedure increases the accuracy of some frequency distributions.

3.
In this paper, a new extension of the Lomax distribution, called the exponentiated power Lomax (EPOLO) distribution, is proposed. The proposed model accommodates monotonically increasing, decreasing, and unimodal hazard rates. The model parameters are estimated using four techniques, and a simulation study examines the performance of the four estimation methods for both complete and censored data. The EPOLO distribution is fitted to the number of revolutions of ball bearings, the tumor sizes of lung cancer patients, and the confirmed total COVID-19 deaths in Egypt, and it provides better fits than several well-known distributions.

4.
Based on the goodness-of-fit approach, a new test is presented for testing exponentiality against the used better than aged in convex ordering (UBAC) class of life distributions. The percentiles of this test are tabulated for sample sizes n = 5(5)40. It is shown that the proposed test is simple, has high relative efficiency against some commonly used alternatives, and has good power. An example from medical science is considered as a practical application of the proposed test. Copyright © 2011 John Wiley & Sons, Ltd.

5.
The methods described in an earlier article on control methods for two related variables are extended to the case of more than two related variables. Matrix notation is introduced because of the resulting simplification in multivariate analysis, and the original two-variable problem is restated in matrix form. The method of principal components is introduced both as a way of characterizing a multivariate process and as a control tool associated with control procedures. These methods are illustrated with a numerical example from the field of ballistic missiles. Approximate multivariate techniques, designed to simplify the administration of these control programs, are also discussed.

6.
Profile monitoring is an approach in quality control used where the process data follow a profile (or curve). The majority of previous studies in profile monitoring have focused on parametric (P) modeling of either linear or nonlinear profiles, with both fixed and random effects, under the assumption of correct model specification. More recently, in the absence of an obvious P model, nonparametric (NP) methods have been employed in the profile monitoring context. For situations where a P model is adequate over part of the data but inadequate over other parts, we propose a semiparametric procedure that combines both P and NP profile fits. We refer to our semiparametric procedure as mixed model robust profile monitoring (MMRPM). All three methods (P, NP, and MMRPM) can account for the autocorrelation within profiles and treat the collection of profiles as a random sample from a common population. For each approach, we propose a version of Hotelling's T2 statistic for use in Phase I analysis to identify unusual profiles based on the estimated random effects, and we obtain the corresponding control limits. Simulation results show that our MMRPM method performs well in flagging outlying profiles when compared to methods based on a misspecified P model or on NP regression. The MMRPM method is also robust to model misspecification, since it performs well even when compared to a correctly specified P model. The proposed chart detects changes in Phase I data and has easily calculated control limits. We apply all three methods to the automobile engine data of Amiri et al. and find that the NP and MMRPM methods flag signals that do not occur in a P approach. Copyright © 2012 John Wiley & Sons, Ltd.

7.
We study the two-parameter maximum likelihood estimation (MLE) problem for the Weibull distribution in the presence of interval data. Without interval data, the problem can be solved easily by regular MLE methods, because the restricted MLE of the scale parameter β for a given shape parameter α has an analytical form, so α can be solved efficiently from its profile score function by standard numerical methods. In the presence of interval data, however, the analytical form for the restricted MLE of β no longer exists, and directly applying regular MLE methods can be inefficient and ineffective. To handle interval data more efficiently and effectively, a new approach is developed in this paper. It combines the Weibull-to-exponential transformation technique with the equivalent failure and lifetime technique. The concept of equivalence is developed to estimate exponential failure rates from uncertain data, including interval data. Since the definition of equivalent failures and lifetimes follows EM algorithms, the convergence of failure rate estimation based on equivalent failures and lifetimes is proved mathematically. The new approach is demonstrated and validated on two published examples, and its performance under different conditions is studied by Monte Carlo simulation. The results indicate that the profile score function for α has only one maximum in most cases; this property enables an efficient search for the optimal value of α.
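For complete data, the classical profile-likelihood route described above can be sketched as follows. This is a minimal illustration assuming NumPy/SciPy; the paper's interval-data machinery (equivalent failures and lifetimes) is not reproduced here:

```python
import numpy as np
from scipy.optimize import brentq

def weibull_profile_mle(x):
    """Two-parameter Weibull MLE for complete data: solve the profile
    score equation for the shape alpha, then recover the scale beta
    from its closed-form restricted MLE."""
    lx = np.log(x)

    def score(a):
        # Profile score: 1/a + mean(ln x) - sum(x^a ln x) / sum(x^a)
        xa = x ** a
        return 1.0 / a + lx.mean() - (xa * lx).sum() / xa.sum()

    a_hat = brentq(score, 0.05, 50.0)              # shape parameter
    b_hat = (x ** a_hat).mean() ** (1.0 / a_hat)   # restricted scale MLE
    return a_hat, b_hat

rng = np.random.default_rng(1)
sample = rng.weibull(2.0, size=2000)   # true shape 2, true scale 1
a_hat, b_hat = weibull_profile_mle(sample)
print(f"shape estimate = {a_hat:.2f}, scale estimate = {b_hat:.2f}")
```

Because the score is a one-dimensional function of α alone, a bracketing root finder such as `brentq` converges quickly, which is exactly the efficiency that is lost once interval data remove the closed form for β.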

8.
We introduce two families of statistics, functionals of the empirical moment generating function process of the logarithmically transformed data, for testing goodness of fit to the two-parameter Weibull distribution or, equivalently, to the type I extreme value model. We show that when affine-invariant estimators are used for the parameters of the extreme value distribution, the distributions of these statistics do not depend on the underlying parameters, and one of them has a limiting chi-squared distribution. We estimate, via simulations, some finite-sample quantiles for the statistics introduced and evaluate their power against a varied set of alternatives.

9.
This paper presents maximum likelihood theory for large-sample optimum accelerated life test plans. The plans are used to estimate a simple linear relationship between (transformed) stress and product life, which has a Weibull or smallest extreme value distribution. Censored data are to be analyzed before all test units fail. The plans show that all test units need not run to failure and that more units should be tested at low test stresses than at high ones. The plans are illustrated with a voltage-accelerated life test of an electrical insulating fluid.

10.
Multivariate count data are common in quality monitoring in the manufacturing and service industries. However, little attention has been paid to high-dimensional Poisson data and two-sided mean shifts. In this article, a hybrid control chart for independent multivariate Poisson data is proposed. The new chart is constructed from a goodness-of-fit test, and its monitoring procedure is described. The performance of the proposed chart is evaluated using Monte Carlo simulation. Numerical experiments show that the new chart is powerful and sensitive in detecting both positive and negative mean shifts, and more robust than other existing multiple Poisson charts for both independent and correlated variables. In addition, a new standardization method for Poisson data is developed, and a real example illustrates the detailed steps of the new chart. Copyright © 2016 John Wiley & Sons, Ltd.
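The abstract does not give the chart's statistic explicitly. One generic goodness-of-fit-style construction for p independent Poisson streams standardizes each count and sums the squares, so both upward and downward mean shifts inflate the statistic; the in-control rates and the control limit below are hypothetical, not the authors' exact design:

```python
import numpy as np
from scipy.stats import chi2

def poisson_chart_stat(counts, lam):
    """Sum of squared standardized Poisson counts: a two-sided
    goodness-of-fit-style monitoring statistic.  Approximately
    chi-square with len(lam) degrees of freedom in control."""
    z = (np.asarray(counts, dtype=float) - lam) / np.sqrt(lam)
    return float((z ** 2).sum())

# Hypothetical in-control rates for three independent Poisson streams.
lam = np.array([4.0, 9.0, 16.0])
ucl = chi2.ppf(0.9973, df=len(lam))   # 3-sigma-equivalent control limit

in_control = poisson_chart_stat([4, 9, 16], lam)    # counts at their means
shifted = poisson_chart_stat([12, 2, 30], lam)      # mixed up/down shifts
print(f"in-control = {in_control:.2f}, shifted = {shifted:.2f}, UCL = {ucl:.2f}")
```

The squared form is what makes the chart two-sided: a downward shift (the count 2 against a mean of 9) contributes to the statistic just as an upward shift does.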

11.
This article concerns datasets in which variables are in the form of intervals, which are obtained by aggregating information about variables from a larger dataset. We propose to view the observed set of hyper-rectangles as an empirical histogram, and to use a Gaussian kernel type estimator to approximate its underlying distribution in a nonparametric way. We apply this idea to both univariate density estimation and regression problems. Unlike many existing methods used in regression analysis, the proposed method can estimate the conditional distribution of the response variable for any given set of predictors even when some of them are not interval-valued. Empirical studies show that the proposed approach has a great flexibility in various scenarios with complex relationships between the location and width of intervals of the response and predictor variables.

12.
Group sparse approaches to regression modeling are finding ever-increasing utility in an array of application areas. While group sparsity can capture certain data structures, in many instances it is desirable to also capture element-wise sparsity. Recent work exploring the latter has been conducted in the context of l2/l1-penalized regression in the form of the sparse group lasso (SGL). Here, we present a novel model, called the sparse group elastic net (SGEN), which uses an l∞/l1/ridge-based penalty. We show that the l∞-norm, which induces group sparsity, is particularly effective in the presence of noisy data. We solve the SGEN model using a coordinate descent-based procedure and compare its performance to the SGL and related methods in the context of hyperspectral imaging in the presence of noisy observations. Supplementary materials for this article are available online.

13.
《Quality Engineering》2007,19(4):281-297
The use of Benford's digital law for data conformance testing has recently received growing interest, particularly in business auditing. A closer examination shows that the currently propagated approaches are questionable: (i) the theoretical foundations rest on asymptotic laws and approximation theorems that give no constructive guidelines for industrial application; (ii) the statistical methods used are not well matched to the asymptotic and approximate character of Benford's law. Starting from a close analysis of these aspects, the paper suggests a more appropriate statistical methodology for digital analysis.
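A baseline version of the digital-analysis test under discussion, a plain Pearson chi-square comparison of observed leading-digit frequencies against Benford's law P(d) = log10(1 + 1/d), can be sketched as follows. This is the kind of naive procedure the article critiques, not its proposed refinement:

```python
import math
from collections import Counter

def leading_digit(v):
    """First significant digit of a nonzero number."""
    v = abs(v)
    while v >= 10:
        v /= 10
    while v < 1:
        v *= 10
    return int(v)

def benford_chi_square(values):
    """Pearson chi-square distance between observed leading-digit
    counts and the Benford expectation n * log10(1 + 1/d)."""
    digits = Counter(leading_digit(v) for v in values if v != 0)
    n = sum(digits.values())
    stat = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        stat += (digits.get(d, 0) - expected) ** 2 / expected
    return stat

# Powers of 2 are a classical example of Benford-conforming data,
# so the statistic should be small.
powers = [2 ** k for k in range(1, 101)]
print(f"chi-square = {benford_chi_square(powers):.3f}")
```

Note that the chi-square reference distribution is itself only asymptotic, which is precisely the kind of mismatch between test and law that the article examines.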

14.
The aim of this paper is to explore some features and possible uses of the posterior predictive p-value for the problem of goodness of fit. First, the behaviour of the posterior predictive p-value is compared with that of the classical p-value through some interesting examples. Then, we consider a decision problem for simultaneously deciding whether to accept/reject a model M and whether to accept/reject a null hypothesis (if the model M has been accepted); the posterior predictive p-value is used for estimating the posterior probability of the model. Research partially supported by DGESIC (Spain) under grant number PB97-0021.

15.
A program for identifying the form of the distribution law of a random quantity is described. The dependence of the optimum number of histogram intervals on the sample size (10–100,000) and on the kurtosis (in the range 2–9.65) is determined by computer modeling. Using 50 different functions, the program can also model random quantities with unimodal and bimodal distribution laws. Translated from Izmeritel'naya Tekhnika, No. 5, pp. 9–14, May, 2007.
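The kurtosis-dependent optima tabulated by the program are not reproduced here, but the sample-size dependence of the interval count can be illustrated with Sturges' classical rule k = 1 + ⌈log2 n⌉ (a sketch only; the abstract's method additionally accounts for kurtosis):

```python
import math

def sturges_bins(n):
    """Sturges' rule for the number of histogram intervals:
    k = 1 + ceil(log2 n).  Grows logarithmically with sample size."""
    return 1 + math.ceil(math.log2(n))

# Interval counts across the sample-size range studied in the article.
for n in (10, 100, 10_000, 100_000):
    print(f"n = {n:>7}: {sturges_bins(n)} intervals")
```

Rules of this kind are a reasonable default for near-Gaussian data, which is exactly why a kurtosis correction matters for heavy-tailed or bimodal distribution laws.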

16.
Survival of manufacturing materials or component parts may depend on two or more variables, and yet joint strength tests can be difficult to perform. For many such materials, multiple strength properties can be estimated only using destructive testing. This problem gives rise to a technique called proof loading—stressing units up to a prescribed load, destroying only the weaker units, and leaving the survivors for further tests. We propose a distribution-free Bayesian approach for estimating the probability of joint failure under two proof loads. Our method does not assume that proof-load survivors are undamaged.

17.
A robust F-criterion for checking the equality of the variances of two non-Gaussian samples is obtained and investigated. Theoretical algorithms are developed and considered. The effectiveness and efficiency of the proposed criterion for known distribution kurtoses are illustrated using specific examples. Translated from Izmeritel'naya Tekhnika, No. 2, pp. 9–12, February, 2005.

18.
Pool scrubbing occurring in the containment of a nuclear reactor is an important mechanism for radioactive particle removal. In this article, an approximate analytical solution to the equation of particle removal with any possible combination of removal processes is suggested. Assuming that the particle size distribution can be approximated by a log-normal function, the reduction of total particle number and mass, and the changes in average particle size and polydispersity, are expressed as explicit functions of their initial values, time, and the removal mechanism parameters. The solution is then applied to a pool scrubbing problem as an application example. The approximate solution derived in this study shows good agreement with the exact solution. The error, caused by an approximation for the slip correction factor, is potentially large only for intermediate-size particles; however, the absolute error remains small because the particle removal rate is at a minimum in that size range. The methodology adopted and the solution derived in this paper will be useful when only the particle size distribution parameters are available, without full information on the whole size distribution.

19.
To improve the efficiency of the current chronic disease prevention and control system, better curb the epidemic of chronic diseases, and protect public health, the Chinese Academy of Engineering launched a major consulting project in 2015, "Strategic Research on the Application of Health Economics to Chronic Disease Prevention and Control Decision-Making." The project team's investigation found that health economics, although an important decision-making tool in health policy, is still at an early stage of application in chronic disease prevention and control decisions. Insufficient recognition of its importance, together with limited capacity to master and apply its methods, has constrained that application. The authors therefore recommend building health economics research capacity with participation from multiple stakeholders, strengthening big-data accumulation and applied research on health economics in chronic disease control decisions, and propose a strategic framework for applying health economics to chronic disease prevention and control decision-making in China.
