Similar Literature (20 records found)
1.
In this article, we offer a new three-parameter model, called the Zubair Lomax distribution. The new model can be very useful in analyzing and modeling real data and provides better fits than some other recent models. Primary properties of the Zubair Lomax model are derived, including moments, probability weighted moments, Renyi entropy, the quantile function and stochastic ordering. The maximum likelihood method is used to estimate the population parameters under both simple random sampling and ranked set sampling schemes. The behavior of the maximum likelihood estimates of the model parameters is studied using Monte Carlo simulation. Bias, mean square error and relative efficiency are used to compare the estimates. In the simulation study, we observe that the estimates based on ranked set sampling are more efficient than those based on simple random sampling. The importance and flexibility of the Zubair Lomax distribution are validated empirically by modeling two types of lifetime data.
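A minimal Monte Carlo sketch of the ranked set sampling (RSS) versus simple random sampling (SRS) efficiency comparison described above, for estimating a population mean. An exponential population stands in because the Zubair Lomax density is not reproduced here, and perfect ranking within sets is assumed; none of these choices come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def srs_mean(n, scale):
    # Simple random sample of n units; estimator is the plain sample mean.
    return rng.exponential(scale, n).mean()

def rss_mean(n_cycles, set_size, scale):
    # Ranked set sampling: each cycle draws set_size sets of set_size units,
    # ranks each set (here by the true values, i.e. perfect ranking) and keeps
    # the i-th order statistic from the i-th set.
    kept = []
    for _ in range(n_cycles):
        for i in range(set_size):
            s = np.sort(rng.exponential(scale, set_size))
            kept.append(s[i])
    return np.mean(kept)

scale, set_size, n_cycles, reps = 2.0, 3, 5, 10000
n = set_size * n_cycles                      # same number of measured units
srs = np.array([srs_mean(n, scale) for _ in range(reps)])
rss = np.array([rss_mean(n_cycles, set_size, scale) for _ in range(reps)])
print("variance SRS:", srs.var())
print("variance RSS:", rss.var())
print("relative efficiency (SRS/RSS):", srs.var() / rss.var())
```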

2.
This paper presents some techniques for the realistic treatment of stochastic lead-time demands in inventory models. First, we suggest the use of the first four moments to describe the diversified distribution forms of the lead times and the daily demands, and formulas for deriving a lead-time demand's first four moments are presented. Next, we demonstrate the use of a lead-time demand's first four moments in conjunction with Johnson et al.'s tables to obtain various probability estimates. Thirdly, we discuss the use of the versatile four-parameter Schmeiser-Deutsch curves to fit a lead-time-demand distribution. The computational advantages of using the fitted Schmeiser-Deutsch curves in solving inventory models are then illustrated.
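The paper works with exact formulas for the first four moments of the lead-time demand; the sketch below only illustrates the quantity involved by simulating a compound demand and estimating those moments empirically. The Poisson lead time and gamma daily demand are illustrative assumptions, not distributions taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulate_ltd(n):
    # Lead-time demand = sum of daily demands over a random number of days.
    # Assumed inputs: lead time ~ Poisson(4) days, daily demand ~ Gamma(2, 5).
    lead_times = rng.poisson(4, n)
    return np.array([rng.gamma(2.0, 5.0, lt).sum() for lt in lead_times])

y = simulate_ltd(50_000)
mean, sd = y.mean(), y.std(ddof=1)
skew, kurt = stats.skew(y), stats.kurtosis(y, fisher=False)
print(f"mean={mean:.2f}, sd={sd:.2f}, skewness={skew:.3f}, kurtosis={kurt:.3f}")
# These four moments could then be matched to a flexible four-parameter family
# (the paper uses Johnson tables and Schmeiser-Deutsch curves) to approximate
# reorder-point probabilities.
```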

3.
A method is presented to estimate the process capability index (PCI) for a set of non-normal data from its first four moments. It is assumed that these four moments, i.e. mean, standard deviation, skewness, and kurtosis, are suitable to approximately characterize the data distribution properties. The probability density function of non-normal data is expressed in Chebyshev-Hermite polynomials up to tenth order from the first four moments. An effective range, defined as the value for which a pre-determined percentage of data falls within the range, is solved numerically from the derived cumulative distribution function. The PCI with a specified limit is hence obtained from the effective range. Compared with some other existing methods, the present method gives a more accurate PCI estimation and shows less sensitivity to sample size. A simple algebraic equation for the effective range, derived from the least-square fitting to the numerically solved results, is also proposed for PCI estimation. Copyright © 2004 John Wiley & Sons, Ltd.
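The paper expands the density in Chebyshev-Hermite polynomials up to tenth order. As a simpler, related illustration of turning the first four moments into an effective range and a capability index, the sketch below uses a fourth-order Cornish-Fisher quantile expansion; the data and specification limits are invented for the example and are not from the paper.

```python
import numpy as np
from scipy import stats

def cf_quantile(p, mean, sd, skew, exc_kurt):
    # Fourth-order Cornish-Fisher expansion of the p-quantile from the first
    # four moments (a lower-order stand-in for the paper's tenth-order
    # Chebyshev-Hermite expansion).
    z = stats.norm.ppf(p)
    w = (z
         + (z**2 - 1) * skew / 6
         + (z**3 - 3*z) * exc_kurt / 24
         - (2*z**3 - 5*z) * skew**2 / 36)
    return mean + sd * w

rng = np.random.default_rng(2)
x = rng.lognormal(0.0, 0.4, 5000)                 # illustrative non-normal data
m, s = x.mean(), x.std(ddof=1)
g1, g2 = stats.skew(x), stats.kurtosis(x)         # skewness, excess kurtosis
lo = cf_quantile(0.00135, m, s, g1, g2)
hi = cf_quantile(0.99865, m, s, g1, g2)
effective_range = hi - lo                         # ~99.73% coverage
USL, LSL = 3.5, 0.3                               # hypothetical spec limits
print("Cp estimate:", (USL - LSL) / effective_range)
```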

4.
Practical solutions to complicated sampling problems can often be found through the use of polynomial approximators. The approximators can yield exact expressions for (approximate) statistical properties, such as moments, obviating the need for sampling; on the other hand, should sampling be used, the approximators can be applied as control variates to sharpen Monte Carlo results. In this paper we give exact expressions for the first three moments of second order polynomials of independent and identically distributed random variables. The utility of these polynomials is illustrated in three examples: The first concerns the evaluation of the moments of the sample variance, the second investigates properties of a present value when interest rates vary, and the third involves control variates in a complicated sampling problem with nonlinear regression.
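A minimal sketch of the control-variate idea: a second-order polynomial approximator with a known mean is used to sharpen a Monte Carlo estimate of E[exp(X)] for X ~ N(0,1). The target function and the polynomial are illustrative choices, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
x = rng.standard_normal(n)

g = np.exp(x)                      # quantity whose mean we want
h = 1.0 + x + 0.5 * x**2           # second-order polynomial approximator of exp(x)
h_mean = 1.5                       # exact: E[1 + X + X^2/2] = 1 + 0 + 0.5

# Optimal control-variate coefficient and the adjusted estimator
C = np.cov(g, h)
beta = C[0, 1] / C[1, 1]
plain = g.mean()
cv = g.mean() - beta * (h.mean() - h_mean)

print("plain MC:", plain, " control variate:", cv, " exact:", np.exp(0.5))
print("variance reduction factor:",
      np.var(g) / np.var(g - beta * (h - h_mean)))
```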

5.
The stochastic fluctuations in the number of disintegrations, which had already been studied experimentally by Rutherford and other investigators at the beginning of the twentieth century, make estimation of net counting rates in the presence of background counts a challenging statistical problem. Exact and approximate Bayesian estimates of net count rates are derived using Poisson and normal distributions for the number of counts detected during varying counting intervals. The posterior densities for the net count rate are derived and plotted for uniform priors. The exact, Poisson-based posterior densities of the background and net count rates were compared graphically with the approximate posteriors obtained from the normal approximation to the Poisson distribution. No practical differences were found when the number of observed gross counts is large. Small numerical differences in the posterior expectations and standard deviations of the counting rates appeared when the number of observed counts was small. A table showing some of these numerical differences for different background and gross counts is included. A normal approximation to the Poisson is satisfactory for the analysis of counting data when the number of observed counts is large; some caution has to be exercised when it is small.
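A small numerical sketch of the Poisson-based Bayesian analysis: with uniform priors on the background and net rates, the joint posterior is evaluated on a grid and the marginal posterior of the net rate is summarized. The counts and counting times below are invented for illustration and are not the paper's data.

```python
import numpy as np
from scipy import stats

# Illustrative data: background and gross counts over fixed counting times.
n_b, t_b = 18, 60.0      # background counts in t_b seconds
n_g, t_g = 45, 60.0      # gross counts in t_g seconds

# Grid over background rate b and net rate s >= 0, flat (uniform) priors.
b = np.linspace(1e-6, 1.5, 400)
s = np.linspace(0.0, 1.5, 400)
B, S = np.meshgrid(b, s, indexing="ij")

log_post = (stats.poisson.logpmf(n_b, B * t_b)
            + stats.poisson.logpmf(n_g, (B + S) * t_g))
post = np.exp(log_post - log_post.max())

net_marginal = post.sum(axis=0)                 # marginal posterior over s
mean_s = (s * net_marginal).sum() / net_marginal.sum()
print("posterior mean net rate:", mean_s, "counts/s")
print("naive point estimate:   ", n_g / t_g - n_b / t_b)
```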

6.
In survey sampling, policy decisions regarding allocation of resources to subgroups, called small areas, or determination of subgroups with specific properties in a population are based on reliable estimates of small area parameters. However, the information is often collected at a different scale than these subgroups. Hence, we need to estimate characteristics of the subgroups from the coarser-scale data. One of the main interests in small area estimation is to produce an ensemble of small area parameters whose distribution across small areas is close to the corresponding distribution of the true parameters. In this paper, we consider the unit-level nested error linear regression model commonly used in small area estimation. We study the case where the covariate in the model is subject to measurement error. To handle this complex model, we propose to use a constrained Bayes method to estimate the true covariate and build the small area Bayes predictor. We also provide measures of performance, such as sensitivity, specificity, and positive/negative predictive values, for the constructed Bayes predictor. We estimate the model parameters using the method of moments and a Bayesian approach to obtain the corresponding empirical and hierarchical Bayes predictors. The performance of our proposed approach is evaluated through a simulation study and a real data application.

7.
The reference prior algorithm (Berger and Bernardo, 1992) is applied to location-scale models with any regular sampling density. A number of two-sample problems are analyzed in this general context, extending the difference, ratio and product of Normal means problems beyond Normality, while explicitly considering possibly different sizes for each sample. Since the reference prior turns out to be improper in all cases, we examine existence of the resulting posterior distribution and its moments under sampling from scale mixtures of Normals. In the context of an empirical example, it is shown that a reference posterior analysis is numerically feasible and can display some sensitivity to the actual sampling distributions. This illustrates the practical importance of questioning the Normality assumption. The first author holds a Research Training Grant (ERBFMBICT 961021) under the Training and Mobility of Researchers Programme, financed by the European Commission.

8.
Sequential tolerance control (STC) is a tolerance control methodology used in discrete parts manufacturing. Recently, an adaptive sphere-fitting method for STC (ASF-STC) was developed to account for potential skewness in the distributions of manufacturing operations, a factor not considered in conventional STC. ASF-STC offers significant improvements over conventional STC when such skewness exists. The direction of skewness of an operation's distribution is a necessary input to ASF-STC, so a novel approach to determining the skewness of a distribution from small sample sizes is presented here. ASF-STC additionally requires distribution information for each operation. The beta distribution is an ideal candidate, as it is very flexible in shape. The literature on four-parameter beta estimation is very limited, and existing estimators perform poorly for small sample sizes. STC was designed for low-volume production, so estimation from small samples is necessary here. This study presents a heuristic, based on the method of moments estimates for a beta distribution, that estimates the four parameters of a beta distribution from a small sample. Several computational results are provided to compare this heuristic to the best-known procedure, with the heuristic found to perform better for the test problems considered. Copyright © 2002 John Wiley & Sons, Ltd.
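The paper's heuristic estimates all four beta parameters from a small sample; the sketch below shows only the simpler method-of-moments step for the two shape parameters when the support endpoints are assumed known. It conveys the moment-matching ingredient without reproducing the heuristic itself, and the sample is invented.

```python
import numpy as np

def beta_mom_known_bounds(x, a, b):
    """Method-of-moments fit of a beta distribution on a known support [a, b].

    Simplified stand-in for the paper's four-parameter heuristic: the endpoints
    are assumed known and only the two shape parameters are estimated from the
    sample mean and variance."""
    y = (np.asarray(x, float) - a) / (b - a)        # rescale to [0, 1]
    m, v = y.mean(), y.var(ddof=1)
    common = m * (1 - m) / v - 1                    # requires v < m * (1 - m)
    return m * common, (1 - m) * common             # (alpha, beta)

rng = np.random.default_rng(4)
sample = 2.0 + 3.0 * rng.beta(2.5, 4.0, size=12)    # small sample on [2, 5]
print(beta_mom_known_bounds(sample, 2.0, 5.0))
```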

9.
This paper is concerned primarily with the method of moments in dissecting a mixture of two normal distributions. In the general case, with two means, two standard deviations, and a proportionality factor to be estimated, the first five sample moments are required, and it becomes necessary to find a particular solution of a ninth degree polynomial equation that was originally derived by Karl Pearson [10]. A procedure which circumvents solution of the nonic equation, and thereby considerably reduces the total computational effort otherwise required, is presented. Estimates obtained in the simpler special case in which the two standard deviations are assumed to be equal are employed as first approximations in an iterative method for simultaneously solving the basic system of moment equations applicable in the more general case in which the two standard deviations are unequal. Conditional maximum likelihood and conditional minimum chi-square estimation, subject to having the first four sample moments equated to corresponding population moments, are also considered. An illustrative example is included.
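A numerical stand-in for the moment-equation approach: rather than solving Pearson's nonic, the sketch below fits the equal-standard-deviation special case by matching the first four raw moments with a least-squares solver, which is the kind of first approximation the paper feeds into its iterative scheme. The simulated data and starting values are illustrative only.

```python
import numpy as np
from scipy.optimize import least_squares

def mixture_raw_moments(p, mu1, mu2, sigma):
    # First four raw moments of a two-component normal mixture with a common
    # standard deviation (the "equal sigma" special case).
    def comp(mu):
        return np.array([mu,
                         mu**2 + sigma**2,
                         mu**3 + 3*mu*sigma**2,
                         mu**4 + 6*mu**2*sigma**2 + 3*sigma**4])
    return p * comp(mu1) + (1 - p) * comp(mu2)

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0.0, 1.0, 600), rng.normal(3.0, 1.0, 400)])
sample_moments = np.array([np.mean(x**k) for k in range(1, 5)])

def residuals(theta):
    p, mu1, mu2, sigma = theta
    return mixture_raw_moments(p, mu1, mu2, sigma) - sample_moments

fit = least_squares(residuals, x0=[0.5, x.min(), x.max(), x.std()],
                    bounds=([0.01, -10, -10, 0.01], [0.99, 10, 10, 10]))
print("p, mu1, mu2, sigma =", np.round(fit.x, 3))
```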

10.
It is known that a probability distribution satisfying the Maximum Entropy Principle (MEP) can be constructed when the available data consist of four moments of the probability density function. Two problems are typically associated with the use of MEP: the definition of the range of acceptable values for the moments M_i, and the evaluation of the coefficients a_j. Both problems have already been accurately resolved by analytical procedures when the first two moments of the distribution are known.

In this work, the analytical solution in the case of four known moments is provided, and a criterion for addressing the general case (whatever the number of known moments) is expounded. The first four moments are expressed in nondimensional form through the expectation and the coefficients of variation, skewness and kurtosis. The range of their acceptable values is obtained from the analytical properties of the differential equations which govern the problem and from the Schwarz inequality.
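A numerical sketch of constructing the maximum-entropy density from four known moments: the coefficients a_j are obtained by minimizing the convex dual on a truncated support, whereas the paper derives the admissible moment ranges and the coefficients analytically. The standard-normal moments are used only as a check; the support and grid are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

def maxent_four_moments(moments, support=(-6.0, 6.0), npts=2001):
    """Maximum-entropy density f(x) = exp(a0 + a1 x + ... + a4 x^4) whose first
    four raw moments match `moments`, found by minimizing the convex dual on a
    finite support with a simple Riemann-sum quadrature."""
    x = np.linspace(*support, npts)
    dx = x[1] - x[0]
    powers = np.vstack([x**j for j in range(1, 5)])       # shape (4, npts)
    m = np.asarray(moments, float)

    def dual(lam):
        raw = lam @ powers
        c = raw.max()                                      # log-sum-exp shift
        w = np.exp(raw - c)
        value = c + np.log(w.sum() * dx) - lam @ m         # log Z - lam . m
        grad = powers @ (w / w.sum()) - m                  # E[x^j] - m_j
        return value, grad

    res = minimize(dual, np.zeros(4), jac=True, method="BFGS")
    w = np.exp(res.x @ powers - (res.x @ powers).max())
    return x, w / (w.sum() * dx)

# Check: with the first four moments of a standard normal, the recovered
# density should reproduce those moments.
x, f = maxent_four_moments([0.0, 1.0, 0.0, 3.0])
dx = x[1] - x[0]
print([round(float((x**j * f).sum() * dx), 3) for j in range(1, 5)])
```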


11.
Much research effort has recently been focused on methods to deal with non-normal populations. While for weak non-normality the normal approximation is a useful choice (as in Shewhart control charts), moderate to strong skewness requires alternative approaches. In this short communication, we discuss the properties required from such approaches and revisit two new ones. The first approach, for attributes data, assumes that the mean, the variance and the skewness measure can be calculated. These are then incorporated in a modified normal approximation, which preserves these moments. Extension of the Shewhart chart to skewed attribute distributions (e.g. the geometric distribution) is thus achieved. The other approach, for variables data, fits a member of a four-parameter family of distributions. However, unlike similar approaches, sample estimates of at most the second degree are employed in the fitting procedure. This has been shown to result in a better representation of the underlying (unknown) distribution than methods based on four-moment matching. Some numerical comparisons are given. Copyright © 2004 John Wiley & Sons, Ltd.

12.
A simplified biokinetic model for ¹³⁷Cs has six parameters representing transfer of material to and from various compartments. Using a Bayesian analysis, the joint probability distribution of these six parameters is determined empirically for two cases with substantial bioassay data. The distribution is found to be a multivariate log-normal. Correlations between different parameters are obtained. The method utilises a fairly large number of pre-determined forward biokinetic calculations, whose results are stored in interpolation tables. Four different methods to sample the multidimensional parameter space with a limited number of samples are investigated: random, stratified, Latin Hypercube sampling with a uniform distribution of parameters, and importance sampling using a lognormal distribution that approximates the posterior distribution. The importance sampling method gives much smaller sampling uncertainty. No sampling-method-dependent differences are perceptible for the uniform distribution methods.
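A toy comparison of two of the sampling schemes mentioned (plain random versus Latin Hypercube) for averaging a smooth function of six parameters over the unit cube. The six-parameter function is a placeholder, not the ¹³⁷Cs biokinetic model or its interpolation tables.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(6)

def model(u):
    # Stand-in for a pre-computed forward calculation: a smooth function of
    # six parameters scaled to the unit cube.
    return np.exp(-(u * np.arange(1, 7)).sum(axis=1))

def estimate(sampler, n):
    return model(sampler(n)).mean()

n, reps = 64, 500
plain = [estimate(lambda k: rng.random((k, 6)), n) for _ in range(reps)]
lhs = [estimate(lambda k: qmc.LatinHypercube(d=6, seed=rng).random(k), n)
       for _ in range(reps)]
print("sampling std, random:", np.std(plain))
print("sampling std, LHS:   ", np.std(lhs))
```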

13.
Given a sample of particles from cascade impactors, the geometric mean and variance are commonly estimated by plotting the cumulative particle-size distribution on log probability paper, assuming a log normal distribution, and drawing a line. Here, theoretical estimates based upon the likelihood of the sample are described, and a simple numerical method is given to obtain these estimates.
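A minimal sketch of likelihood-based estimation of the geometric mean and geometric standard deviation under a log-normal size model, assuming individual particle sizes are available; the impactor setting in the paper involves grouped cumulative data and a more involved likelihood. The diameters below are invented.

```python
import numpy as np

def lognormal_mle(diameters):
    """Maximum-likelihood estimates of the geometric mean and geometric
    standard deviation under a log-normal model, as a simple alternative to
    reading them off log probability paper."""
    logs = np.log(np.asarray(diameters, float))
    mu_hat = logs.mean()
    sigma_hat = logs.std(ddof=0)          # MLE uses the 1/n variance
    return np.exp(mu_hat), np.exp(sigma_hat)

d = [0.8, 1.1, 1.3, 1.7, 2.2, 2.9, 3.6, 4.8]   # hypothetical diameters (um)
gm, gsd = lognormal_mle(d)
print(f"geometric mean = {gm:.2f} um, geometric SD = {gsd:.2f}")
```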

14.
Little published information is available about moments, other than the first, of the distribution of the sample number in Wald-type sequential probability ratio tests. Empirical evidence from Monte Carlo studies is presented for the inference that the variance of the sample number is approximately proportional to the square of the average sample number.
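That proportionality can be checked directly by simulation: the sketch below runs a Wald SPRT for a Bernoulli proportion and reports Var(N)/ASN² at several true parameter values. The hypotheses and error rates are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def sprt_sample_number(p_true, p0=0.5, p1=0.6, alpha=0.05, beta=0.05):
    # Wald SPRT for a Bernoulli proportion: accumulate the log likelihood
    # ratio until it leaves (lower, upper); return the sample number used.
    upper = np.log((1 - beta) / alpha)
    lower = np.log(beta / (1 - alpha))
    llr, n = 0.0, 0
    while lower < llr < upper:
        x = rng.random() < p_true
        llr += np.log(p1 / p0) if x else np.log((1 - p1) / (1 - p0))
        n += 1
    return n

for p_true in (0.45, 0.50, 0.55, 0.60, 0.65):
    ns = np.array([sprt_sample_number(p_true) for _ in range(2000)])
    asn, var_n = ns.mean(), ns.var()
    print(f"p={p_true:.2f}  ASN={asn:7.1f}  Var(N)/ASN^2={var_n / asn**2:.3f}")
```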

15.
In general, the exact probability distribution of a definite integral of a given non-Gaussian random field is not known. Some information about this unknown distribution can be obtained from the third and fourth moments of the integral. Approximations to these moments can be calculated by discretizing the integral and replacing the integrand by third-degree polynomials of correlated Gaussian variables which reproduce the first four moments and the correlation function of the field correctly. The method based on these ideas (see Ditlevsen O, Mohr G, Hoffmeyer P. Integration of non-Gaussian fields. Probabilistic Engineering Mechanics, 1996) is discussed, further developed, and used in a computer program which produces fairly accurate approximations to the mentioned moments with no restrictions put on the weight function applied to the field or on the correlation function of the field. A pathological example demonstrating the limitations of the method is given.

16.
A reliability analysis is described in which the sample moments of the design variables are used to define the first four moments of a function representing the condition of failure, or malfunction, in the space of the design variables. These moments are then employed in transforming the failure function to the space of a normal random variable, thus permitting an estimate of failure probability to be made. A four-moment Reliability Index is defined and some examples of the technique are provided. The extension of the technique to stationary stochastic processes, for time-dependent problems, is discussed, and some alternative transformation procedures are compared.

17.
In this article we compute the expected Fisher information and the asymptotic variance-covariance matrix of the maximum likelihood estimates based on a progressively type II censored sample from a Weibull distribution by direct calculation as well as the missing-information principle. We then use these values to determine the optimal progressive censoring plans. Three optimality criteria are considered, and some selected optimal progressive censoring plans are presented according to these optimality criteria. We also discuss the construction of progressively censored reliability sampling plans for the Weibull distribution. Three illustrative examples are provided with discussion.

18.
A. W. Matz, Technometrics, 2013, 55(4): 475-484
The quartic exponential (QE) distribution, defined by a probability density function of the form f(x) ∝ exp(α₁x + α₂x² + α₃x³ + α₄x⁴), is examined in detail.

The problem of obtaining maximum likelihood point estimates of the population parameters reduces to that of identifying the α coefficients as functions of the population moments μ′_r, r = 1, 2, 3, 4.

The invalidity of methods proposed by previous authors to deal with the nonlinear relationships involved is explained, and a new algorithm is developed which overcomes these objections. The new algorithm is applied to practical data, and the resulting distributions fitted to observed frequencies are shown to compare favourably with those obtained by previous methods.

19.
The paper presents a general approach to the estimation of the quantile function of a non-negative random variable using the principle of minimum cross-entropy (CrossEnt) subject to constraints specified in terms of expectations of order statistics estimated from observed data.

Traditionally, CrossEnt is used for estimating the probability density function under specified moment constraints. In such analyses, consideration of higher order moments is important for accurate modelling of the distribution tail. Since higher order (>2) moment estimates from a small sample of data tend to be highly biased and uncertain, the use of CrossEnt quantile estimates in extreme value analysis is fairly limited.

The present paper is an attempt to overcome this problem via the use of probability weighted moments (PWMs), which are essentially the expectations of order statistics. In contrast with ordinary statistical moments, higher order PWMs can be accurately estimated from small samples. By interpreting a PWM as a moment of the quantile function, the paper derives an analytical form of the quantile function using the CrossEnt principle. Monte Carlo simulations are performed to assess the accuracy of CrossEnt quantile estimates obtained from small samples.
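A key ingredient is that probability weighted moments can be estimated with little bias from small samples. The sketch below implements the standard unbiased estimator of beta_r = E[X F(X)^r] from the order statistics; the exponential sample is only a check, not data or a procedure from the paper.

```python
import numpy as np

def sample_pwm(x, r):
    """Unbiased estimator of the probability weighted moment
    beta_r = E[X F(X)^r] from the ordered sample."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    i = np.arange(1, n + 1)
    w = np.ones(n)
    for k in range(1, r + 1):
        w *= (i - k) / (n - k)       # product (i-1)...(i-r) / (n-1)...(n-r)
    return (w * x).mean()

rng = np.random.default_rng(8)
sample = rng.exponential(2.0, 30)    # small sample; true mean 2
# For an exponential with mean 2: beta_0 = 2 and beta_1 = 1.5.
print([round(sample_pwm(sample, r), 3) for r in range(4)])
```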

20.
A method of estimating the probability density function and cumulative distribution function when only the ordinary or central moments of the distribution are known is examined. The technique is used in conjunction with previous work which yields the ordinary moments of time to first passage failure to obtain accurate estimates of the failure probability for two representative oscillators. The results are then compared to those obtained by a nearly exact numerical scheme.
