Similar Documents (20 results)
1.
We introduce a generalization of the standard Planck distribution discussed by Johnson and Kotz (1970, Section 33.6.1). This generalization results in a very flexible family which contains the gamma distribution as a particular case. In this paper we provide a comprehensive treatment of the mathematical properties of the family. We derive expressions for the nth moment, the moment generating function, the characteristic function, the mean deviation about the mean, the mean deviation about the median, the Rényi entropy, the Shannon entropy, and the asymptotic distribution of the extreme order statistics. Estimation and simulation issues are also considered.
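Johnson and Kotz's standard Planck density is commonly written as f(x) = x^a / (Γ(a+1) ζ(a+1) (e^x − 1)) for x > 0; assuming that form (the paper's generalized family is not reproduced here), a minimal sketch checks the closed-form nth moment E[X^n] = Γ(a+n+1) ζ(a+n+1) / (Γ(a+1) ζ(a+1)) against numerical quadrature:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as G, zeta

def planck_pdf(x, a=3.0):
    """Standard Planck density f(x) = x^a / (G(a+1) zeta(a+1) (e^x - 1))."""
    return x**a / (G(a + 1) * zeta(a + 1) * np.expm1(x))

def planck_moment(n, a=3.0):
    """Closed-form nth moment: G(a+n+1) zeta(a+n+1) / (G(a+1) zeta(a+1))."""
    return G(a + n + 1) * zeta(a + n + 1) / (G(a + 1) * zeta(a + 1))

num, _ = quad(lambda x: x**2 * planck_pdf(x), 0, np.inf)
print(num, planck_moment(2))   # both ≈ 18.8 for a = 3
```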

2.
The maximum information entropy method is one of the principal approaches for evaluating measurement uncertainty from a probability distribution. The higher-order moments it relies on require large samples of measurement data, whereas measurements in calibration/testing laboratories are typically small samples, so maximum entropy evaluation of small-sample measurement uncertainty lacks reliability. A maximum information entropy uncertainty evaluation method is proposed that uses the quantile function and probability-weighted moments as constraints, reducing the moment computations from higher order to first order. A genetic algorithm is combined with the method to solve for the probability distribution, and the Bootstrap distribution is used to estimate the expanded uncertainty and coverage interval, resolving the complicated computations caused by the asymmetry of the distribution in quantile interval estimation.
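A minimal sketch of the Bootstrap step described above, assuming hypothetical small-sample data (the MaxEnt-plus-genetic-algorithm distribution fitting itself is not reproduced): a percentile bootstrap yields the coverage interval and an expanded-uncertainty half-width for the sample mean.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_coverage(x, level=0.95, B=5000):
    """Percentile-bootstrap coverage interval and half-width (expanded
    uncertainty) for the mean of a small sample x."""
    means = np.array([rng.choice(x, size=len(x), replace=True).mean()
                      for _ in range(B)])
    lo, hi = np.quantile(means, [(1 - level) / 2, (1 + level) / 2])
    return (lo, hi), (hi - lo) / 2

x = np.array([9.98, 10.02, 10.01, 9.97, 10.03, 10.00, 9.99, 10.02])  # hypothetical
interval, U = bootstrap_coverage(x)
print("95% coverage interval:", interval, " expanded uncertainty U ≈", U)
```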

3.
The maximum entropy principle constrained by probability weighted moments is a useful technique for unbiased and efficient estimation of the quantile function of a random variable from a sample of complete observations. However, censored or incomplete data are often encountered in engineering reliability and lifetime distribution analysis. This paper presents a new distribution-free method for estimating the quantile function of a non-negative random variable from a censored sample of data, based on the principle of partial maximum entropy (MaxEnt) in which partial probability weighted moments (PPWMs) are used as constraints. Numerical results and practical examples presented in the paper confirm the accuracy and efficiency of the proposed partial MaxEnt quantile function estimation method for censored samples.
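A sketch of the constraint side of this construction: unbiased sample probability-weighted moments b_r = E[X F(X)^r] from a complete sample. The PPWMs for censored data restrict such sums to the uncensored order statistics, and the MaxEnt inversion itself is not shown here.

```python
import numpy as np
from math import comb

def pwm(x, r):
    """Unbiased sample probability-weighted moment b_r = E[X F(X)^r],
    computed from the ascending order statistics of a complete sample."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    # weight of the (j+1)-th order statistic: C(j, r) / C(n-1, r)
    w = np.array([comb(j, r) / comb(n - 1, r) for j in range(n)])
    return (w * x).mean()

x = np.random.default_rng(0).exponential(2.0, size=50)
print([pwm(x, r) for r in range(3)])   # b0 (the sample mean), b1, b2
```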

4.
This paper presents three new computational methods for calculating design sensitivities of statistical moments and reliability of high-dimensional complex systems subject to random input. The first method represents a novel integration of the polynomial dimensional decomposition (PDD) of a multivariate stochastic response function and score functions. Applied to the statistical moments, the method provides mean-square convergent analytical expressions of design sensitivities of the first two moments of a stochastic response. The second and third methods, relevant to probability distribution or reliability analysis, exploit two distinct combinations built on PDD: the PDD-saddlepoint approximation (SPA) or PDD-SPA method, entailing SPA and score functions; and the PDD-Monte Carlo simulation (MCS) or PDD-MCS method, utilizing the embedded MCS of the PDD approximation and score functions. For all three methods developed, the statistical moments or failure probabilities and their design sensitivities are determined concurrently from a single stochastic analysis or simulation. Numerical examples, including a 100-dimensional mathematical problem, indicate that the new methods provide not only theoretically convergent or accurate design sensitivities, but also computationally efficient solutions. A practical example involving robust design optimization of a three-hole bracket illustrates the usefulness of the proposed methods. Copyright © 2014 John Wiley & Sons, Ltd.
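A toy illustration of the score-function idea that lets a moment and its design sensitivity come from one simulation (this is not PDD itself; the normal input X ~ N(μ, σ²) and test response h(x) = x² are assumptions for the demo). The identity used is d/dμ E[h(X)] = E[h(X) s(X; μ)], where s is the score of the input density.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, N = 1.5, 0.5, 200_000
x = rng.normal(mu, sigma, N)

h = x**2                         # response evaluated on the single sample
score = (x - mu) / sigma**2      # d log pdf / d mu for a normal input

moment = h.mean()                # E[h(X)]           (exact: mu^2 + sigma^2 = 2.5)
sens = (h * score).mean()        # d E[h(X)] / d mu  (exact: 2 mu = 3.0)
print(moment, sens)
```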

5.
A semi-analytical simulation method is proposed in this paper to assess the system reliability of structures. Monte Carlo simulation with variance-reduction techniques, namely systematic and antithetic sampling, is employed to obtain samples of the structural resistance. The variance-reduction techniques make it possible to simulate the structural resistance adequately with fewer runs of structural analysis. Once the resistance samples and their moments are determined, the exponential polynomial method (EPM) is used to fit the probability density function of the structural resistance. The EPM provides the approximate distribution and statistical characteristics of the structural resistance, after which the first-order second-moment method can be applied to calculate the structural failure probability. Numerical examples are provided for a structural component and two ductile frames, illustrating that the proposed method facilitates the evaluation of system reliability in structural safety assessments.
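A minimal sketch of the antithetic-sampling idea on a hypothetical two-variable resistance model (systematic sampling and the EPM fit are not reproduced). The linear part of the response cancels within each antithetic pair, so the estimator variance drops for the same total number of runs.

```python
import numpy as np

rng = np.random.default_rng(3)

def resistance(u):
    """Hypothetical resistance model driven by two standard-normal inputs."""
    return 50.0 + 5.0 * u[..., 0] + 2.0 * u[..., 1] ** 2

N = 10_000
u = rng.standard_normal((N, 2))
crude = resistance(rng.standard_normal((2 * N, 2)))   # 2N plain runs
anti = 0.5 * (resistance(u) + resistance(-u))         # N antithetic pairs (2N runs)

print(crude.mean(), crude.var(ddof=1) / (2 * N))      # variance of the mean
print(anti.mean(), anti.var(ddof=1) / N)              # smaller: linear term cancels
```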

6.
We study basic properties of bivariate systems with exchangeable components and exponential conditional distributions, which represent bi-component biological or engineering systems with structural dependency. This is equivalent to supposing that we have similar components with the bivariate exponential conditional joint distribution defined by Arnold and Strauss (1988). Specifically, we study the reliability functions, the moments, some aging measures, and ordering and classification properties for series and parallel systems. Supported by Ministerio de Ciencia y Tecnología under grant BFM2003-02947.
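A sketch of simulating such a pair by Gibbs sampling, using the fact that for the Arnold-Strauss density f(x, y) ∝ exp(−αx − βy − γxy) both conditionals are exponential; series and parallel reliabilities are then estimated empirically. The parameter values are hypothetical, and α = β gives the exchangeable case.

```python
import numpy as np

rng = np.random.default_rng(4)

def gibbs_arnold_strauss(alpha, beta, gamma, n=20_000, burn=500):
    """Gibbs sampler for f(x, y) ∝ exp(-alpha*x - beta*y - gamma*x*y);
    the conditionals are exponential: X | Y=y ~ Exp(alpha + gamma*y)."""
    x, y = 1.0, 1.0
    out = np.empty((n, 2))
    for i in range(n + burn):
        x = rng.exponential(1.0 / (alpha + gamma * y))
        y = rng.exponential(1.0 / (beta + gamma * x))
        if i >= burn:
            out[i - burn] = x, y
    return out

t = gibbs_arnold_strauss(1.0, 1.0, 0.5)
for s in (0.5, 1.0, 2.0):
    print(s, (t.min(axis=1) > s).mean(),   # series system reliability at s
             (t.max(axis=1) > s).mean())   # parallel system reliability at s
```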

7.
Estimation of the scale parameter of the Gamma distribution under q-symmetric entropy loss
Building on the symmetric entropy loss function, this paper defines a q-symmetric entropy loss function and studies the minimum risk equivariant (MRE) estimator, the Bayes estimator, and the minimax estimator of the scale parameter of the Gamma distribution under this loss. The admissibility and inadmissibility of these estimators are also discussed. Finally, numerical comparisons are made of the estimates for the exponential and Gamma distributions under the two loss functions.
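For the base (non-q) symmetric entropy loss L(θ, δ) = δ/θ + θ/δ − 2, the Bayes rule is δ = sqrt(E[θ|x] / E[1/θ|x]). Below is a sketch under an assumed conjugate inverse-gamma prior for the Gamma scale with known shape; the q-version of the loss is specific to the paper and not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

# Gamma(k, theta) data with known shape k; conjugate prior theta ~ IG(a, b).
# Posterior: theta | x ~ IG(A, B) with A = a + n*k, B = b + sum(x).
k, theta_true = 2.0, 3.0
x = rng.gamma(k, theta_true, size=30)
a, b = 2.0, 2.0
A, B = a + len(x) * k, b + x.sum()

post_mean = B / (A - 1)            # Bayes estimate under squared error loss
# Under symmetric entropy loss, delta = sqrt(E[theta|x] / E[1/theta|x]);
# for an IG(A, B) posterior, E[theta|x] = B/(A-1) and E[1/theta|x] = A/B:
delta = B / np.sqrt(A * (A - 1))
print(post_mean, delta)
```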

8.
Engineering applications of maximum entropy reliability theory
韦征,叶继红,沈世钊 《振动与冲击》2007,26(6):146-148,151
Based on the concept of maximum entropy in information theory, this paper explores engineering applications of maximum-entropy reliability theory. It discusses how the sample mean and standard deviation affect convergence, and how the sample size and the order of the maximum-entropy method affect the accuracy of the probability density function, the exceedance probability, and the reliability index. It recommends adopting the fourth-moment method in structural analysis and gives several suggestions for controlling accuracy when applying the method.
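A sketch of the basic moment-constrained MaxEnt construction this line of work builds on: fit f(x) = exp(−Σ λᵢ xⁱ), i = 1..4, to the first four sample moments by minimizing the convex dual log Z(λ) + Σ λᵢ mᵢ. The grid, sample, and optimizer settings are illustrative, not the paper's.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize

def maxent_pdf(moments, grid):
    """Fourth-order maximum-entropy density on a fixed grid whose first
    four moments match `moments`."""
    m = np.asarray(moments)
    powers = np.vstack([grid**i for i in range(1, 5)])

    def dual(lam):
        f = np.exp(np.clip(-(lam @ powers), -700.0, 700.0))  # avoid overflow
        return np.log(trapezoid(f, grid)) + lam @ m

    lam = minimize(dual, np.zeros(4), method="Nelder-Mead",
                   options={"maxiter": 20_000, "fatol": 1e-12}).x
    f = np.exp(-(lam @ powers))
    return f / trapezoid(f, grid)

rng = np.random.default_rng(6)
x = rng.normal(size=10_000)
grid = np.linspace(-6.0, 6.0, 1201)
f = maxent_pdf([np.mean(x**i) for i in range(1, 5)], grid)
print(trapezoid(grid**2 * f, grid))   # recovered second moment, ≈ 1
```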

9.
In this article, a new generalization of the inverse Lindley distribution is introduced based on the Marshall-Olkin family of distributions. We call the new distribution the generalized Marshall-Olkin inverse Lindley distribution; it offers more flexibility for modeling lifetime data. The new distribution includes the inverse Lindley and the Marshall-Olkin inverse Lindley as special cases. Essential properties of the generalized Marshall-Olkin inverse Lindley distribution are investigated, including the quantile function, ordinary moments, incomplete moments, moments of residual life, and stochastic ordering. Maximum likelihood estimation is considered under complete samples, Type-I censoring, and Type-II censoring. Maximum likelihood estimators as well as approximate confidence intervals of the population parameters are discussed. A comprehensive simulation study is done to assess the performance of the estimates based on their biases and mean square errors. The notability of the generalized Marshall-Olkin inverse Lindley model is clarified by means of two real data sets. The results show that the generalized Marshall-Olkin inverse Lindley model can produce better fits than the power Lindley, extended Lindley, alpha power transmuted Lindley, alpha power extended exponential, and Lindley distributions.
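A sketch of maximum likelihood fitting for the inverse Lindley special case, whose density is f(x) = θ²/(1+θ) · (1+x)/x³ · e^(−θ/x) for x > 0 (the generalized Marshall-Olkin extension adds parameters not reproduced here; the data and search bounds are hypothetical).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def inv_lindley_logpdf(x, theta):
    """log pdf of the inverse Lindley distribution, x > 0, theta > 0."""
    return (2 * np.log(theta) - np.log1p(theta)
            + np.log1p(x) - 3 * np.log(x) - theta / x)

rng = np.random.default_rng(7)
x = 1.0 / rng.gamma(2.0, 1.0, 200)     # hypothetical positive lifetime data

res = minimize_scalar(lambda t: -inv_lindley_logpdf(x, t).sum(),
                      bounds=(1e-6, 50.0), method="bounded")
print("MLE of theta:", res.x)
```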

10.
E. YARIMER 《工程优选》2013,45(1-3):165-181
The reliability of multi-element, fatigue-prone systems subjected to cyclic, quasi-static loading is considered. The element times-to-failure have independent Weibull distributions. The measure of reliability is the beta index for the number of cycles to failure of the system. A dominant failure path is determined by minimizing beta over all possible sequences of element failures, using the Dynamic Programming technique as formulated for the Travelling Salesman problem. Evaluation of the beta index requires the first two moments of the Weibull-distributed random variable, conditioned on the element having already survived some number of cycles. A convenient method for calculating these conditional moments is described. The paper concludes with numerical examples (some of which provide evidence of the lack of monotonicity of the objective function) and some remarks on possible ways of improving the computational efficiency.
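A minimal sketch of the conditional-moment computation: E[T^k | T > t₀] for a Weibull time-to-failure, here by straightforward quadrature rather than the paper's convenient method (parameters are illustrative).

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import weibull_min

def conditional_moment(k, t0, shape, scale):
    """E[T^k | T > t0] for a Weibull lifetime, by numerical quadrature:
    the truncated integral of t^k f(t) divided by the survival S(t0)."""
    dist = weibull_min(shape, scale=scale)
    num, _ = quad(lambda t: t**k * dist.pdf(t), t0, np.inf)
    return num / dist.sf(t0)

m1 = conditional_moment(1, 2.0, 1.5, 3.0)
m2 = conditional_moment(2, 2.0, 1.5, 3.0)
print("conditional mean:", m1, " conditional variance:", m2 - m1**2)
```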

11.
To study the seismic reliability of bridge piers under nonlinear seismic response, the random function-spectral representation model and the higher-order moment method are introduced, and a pier seismic reliability analysis method based on the first four moments of the extreme structural response is proposed. A single-pier model is built with a trilinear hysteretic restoring-force model. Nonstationary ground-acceleration time histories are generated with the random function-spectral representation model and used in nonlinear time-history analyses of the pier, on which basis a computational framework for the first four moments (mean, standard deviation, skewness, and kurtosis) of the extreme response is established. Finally, given a displacement limit for the pier, the performance function of pier displacement is formulated and the seismic reliability index is computed with the higher-order moment method. Analysis of a pier structure verifies the efficiency and accuracy of the method: compared with Monte Carlo simulation, the maximum relative errors of the computed first four moments and of the seismic reliability index (failure probability) are 0.28% and 1.92% (4.92%), respectively. The method provides an effective approach for seismic reliability assessment of bridge piers.
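A sketch of one common fourth-moment route from (mean, standard deviation, skewness, kurtosis) of the extreme response to a reliability index, using a Cornish-Fisher quantile expansion; this is not necessarily the higher-order moment formula used in the paper, and the numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def beta_fourth_moment(mu, sig, skew, kurt, limit):
    """Reliability index from the first four moments of an extreme response:
    invert a Cornish-Fisher quantile expansion at the displacement limit."""
    g2 = kurt - 3.0   # excess kurtosis
    def x_of_z(z):
        zc = (z + skew / 6 * (z**2 - 1) + g2 / 24 * (z**3 - 3 * z)
              - skew**2 / 36 * (2 * z**3 - 5 * z))
        return mu + sig * zc
    z_star = brentq(lambda z: x_of_z(z) - limit, -10, 10)
    return z_star, norm.sf(z_star)   # beta and failure probability

beta, pf = beta_fourth_moment(mu=0.08, sig=0.02, skew=0.6, kurt=3.8, limit=0.15)
print(beta, pf)
```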

12.
Calculating the probability of exceedance for nonstationary non-Gaussian responses remains a great challenge in structural reliability. In this paper, an analytical solution is proposed for the mean upcrossing rate of nonstationary non-Gaussian responses, obtained by approximating the displacement and velocity responses with a bivariate vector translation process in which the unified Hermite polynomial model (UHPM) serves as the mapping function. The first four moments (i.e., mean value, standard deviation, skewness, and kurtosis) and the cross-correlation function of the displacement and velocity responses needed in UHPM are estimated from representative samples generated by the random function-spectral representation method (RFSRM) and time-domain analysis. Under the Poisson assumption on the upcrossing events, the extreme value distribution or probability of exceedance of the structural response can then be determined. The proposed method is applicable to a wide range of structural responses, including asymmetric and hardening or softening responses. Three numerical examples demonstrate the efficiency and accuracy of the proposed method, which provides an accurate and useful tool for dynamic reliability assessment in engineering applications.
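Once a mean upcrossing rate ν(t) is in hand, the Poisson assumption gives the exceedance probability directly as P_f = 1 − exp(−∫ ν(t) dt); a minimal sketch with a hypothetical nonstationary rate:

```python
import numpy as np
from scipy.integrate import trapezoid

def exceedance_probability(nu, t):
    """P(at least one upcrossing of the barrier over [t[0], t[-1]])
    under the Poisson assumption: 1 - exp(-integral of nu(t))."""
    return -np.expm1(-trapezoid(nu, t))

t = np.linspace(0.0, 20.0, 2001)
nu = 1e-3 * np.exp(-((t - 8.0) / 4.0) ** 2)   # hypothetical upcrossing rate
print(exceedance_probability(nu, t))
```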

13.
This paper proposes an analytical solution for fast tolerance analysis of the assembly of components whose mean shift or drift follows a doubly-truncated normal distribution. The assembly of components with a mean shift or drift in the form of a uniform distribution (the Gladman model) can be calculated by this method as well, since the uniform distribution is a special case of the doubly-truncated normal distribution. Integration formulae for the first four moments of the truncated normal distribution are first derived. The first four moments of the resultant tolerance distribution are then calculated. As a result, the resultant tolerance specification is represented as a function of the standard deviation and the coefficient of kurtosis of the resultant distribution. The resultant tolerance specification calculated by this method is more accurate than that predicted by the Gladman model or the simplified truncated normal model; the difference between this model and the Monte Carlo method with 1,000,000 simulation samples is less than 0.5%. The merit of the proposed method is that it is fast and accurate, which is crucial for engineering applications in tolerance analysis.
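A sketch of the moment side of such an analysis: the first four cumulants of doubly-truncated normal components (via scipy.stats.truncnorm), summed by cumulant additivity to obtain the four moments of the assembly stack. The component dimensions and truncation limits are hypothetical, and the closed-form integration formulae of the paper are not reproduced.

```python
import numpy as np
from scipy.stats import truncnorm

def cumulants(mu, sigma, lo, hi):
    """First four cumulants of a normal(mu, sigma) truncated to [lo, hi]."""
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    m, v, skew, exkurt = truncnorm(a, b, loc=mu, scale=sigma).stats(moments="mvsk")
    k2 = float(v)
    return np.array([float(m), k2, float(skew) * k2**1.5, float(exkurt) * k2**2])

parts = [cumulants(10.0, 0.05, 9.9, 10.1),     # three hypothetical components;
         cumulants(5.0, 0.03, 4.94, 5.06),     # cumulants of independent
         cumulants(8.0, 0.04, 7.9, 8.1)]       # summands simply add
k1, k2, k3, k4 = np.sum(parts, axis=0)
sd = np.sqrt(k2)
print("assembly mean:", k1, " sd:", sd,
      " skewness:", k3 / sd**3, " kurtosis:", 3 + k4 / sd**4)
```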

14.
In this paper, we propose Bayesian exponentially weighted moving average (EWMA) control charts for the mean under nonnormal lifetime distributions. We use time-between-events data that follow the Exponential distribution and propose Bayesian EWMA control charts for the Exponential distribution and for Exponential distributions transformed into the Inverse Rayleigh and Weibull distributions. To develop the control charts, we use a uniform prior under five different symmetric and asymmetric loss functions (LFs), namely the squared error loss function (SELF), the precautionary loss function (PLF), the general entropy loss function (GELF), the entropy loss function (ELF), and the weighted balance loss function (WBLF). The average run length (ARL) and the standard deviation of run length (SDRL) are used to check the performance of the proposed Bayesian EWMA control charts for the Exponential and transformed Exponential distributions. An extensive simulation study is conducted to evaluate the proposed Bayesian EWMA control chart for nonnormal distributions. The results show that the proposed control chart with the Weibull distribution produces the best results among the considered distributions under the different LFs. A real data example is presented for implementation purposes.
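A sketch of how ARL and SDRL are typically estimated by simulation, here for a classical (non-Bayesian) one-sided EWMA chart on exponential time-between-events data; the smoothing constant and control limit are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(8)

def arl(lmbda=0.2, L=1.5, shift=1.0, n_runs=500, max_len=20_000):
    """Simulated ARL and SDRL of a lower-limit EWMA chart on exponential
    data (in-control mean 1; a shrinking mean signals deterioration).
    Runs are truncated at max_len observations."""
    mu0 = 1.0
    lcl = mu0 - L * np.sqrt(lmbda / (2.0 - lmbda)) * mu0   # asymptotic limit
    runs = np.empty(n_runs)
    for r in range(n_runs):
        z, i = mu0, 0
        while i < max_len:
            i += 1
            z = lmbda * rng.exponential(shift * mu0) + (1.0 - lmbda) * z
            if z < lcl:
                break
        runs[r] = i
    return runs.mean(), runs.std(ddof=1)

print("in-control ARL, SDRL:", arl(shift=1.0))
print("shifted    ARL, SDRL:", arl(shift=0.5))
```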

15.
We study the influence of production on utilization functions; a concrete example is the influence of the growth of literature on the obsolescence (aging) of this literature. Both synchronous and diachronous obsolescence are studied. Assuming an increasing exponential function for production and a decreasing one for aging, we show that, in the synchronous case, the larger the increase in production, the larger the obsolescence. In the diachronous case the opposite relation holds: the larger the increase in production, the smaller the obsolescence rate. This has also been shown previously by Egghe, but the present proof is shorter and yields more insight into the derived results. If a decreasing exponential function is used to model production, the opposite results are obtained. It is typical for this study that there are two different time periods: the period of production (growth) and, per year appearing in the production period, the period of aging (measured synchronously and diachronously). The interaction of these periods is described via convolutions (discrete as well as continuous).
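A sketch of the discrete convolution structure described here, assuming exponential production growth and exponential aging (the rates are hypothetical): diachronous use of the whole literature in year t is the convolution of production with the aging kernel, while the synchronous view fixes an observation year and looks back across item ages.

```python
import numpy as np

years = np.arange(0, 30)
production = 100.0 * 1.05 ** years   # exponentially growing literature
aging = np.exp(-0.3 * years)         # per-item use decay with age

# diachronous: total use in year t = sum_s production[s] * aging[t - s]
use = np.convolve(production, aging)[:len(years)]

# synchronous: from observation year T, usage of items of age u
T = 29
synchronous = production[T - years] * aging[years]
print(use[-5:])
print(synchronous[:5])
```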

16.
The principle of maximum entropy (in its classical form), successfully applied in many fields (e.g. statistics, reliability, estimation), has recently been extended to analyze systems governed by stochastic differential equations, and especially to determining the stationary probability distribution of the solution process. In this paper we develop the maximum entropy approach to characterize non-stationary probability distributions of the solutions of stochastic systems. The variational problem for the entropy functional includes time-dependent constraints in the form of differential equations for moments. The general scheme of the method is given, along with the effective treatment of a number of first- and second-order stochastic systems. The maximum entropy probability distributions are compared with exact solutions and with simulation results.

17.
Using Gaussian quadrature we can find m concentrations of probability that replace the density function of a random variable X and match 2m − 1 of its moments. This reduces a probabilistic analysis to m deterministic ones. Even small values of m provide excellent accuracy in many practical circumstances. When fewer than 2m − 1 moments are known there is arbitrariness in the choice of the concentrations, which is overcome by resorting to the maximum entropy formalism. Its use is here systematized for the case in which a ≤ X ≤ b and we know N moments of the density of X, so that the calculation of N − 1 integrals suffices for finding the density function and any number of its moments. The approach is illustrated for m = 2 and 3, N = 2, 3, a = 0, b = ∞, and graphs are provided for finding the equivalent concentrations.
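A sketch for m = 2: Rosenblueth-style concentrations matching 2m − 1 = 3 moments (mean, variance, skewness), with the matched moments verified numerically. A probabilistic analysis of g(X) then needs only the two deterministic runs g(x[0]) and g(x[1]).

```python
import numpy as np

def two_point_concentrations(mu, sigma, skew):
    """Two concentrations (locations and weights) matching the first
    three moments of X: standardized points z± = s ± sqrt(1 + s^2),
    s = skew/2, with weights chosen to zero the standardized mean."""
    s = skew / 2.0
    z_plus, z_minus = s + np.sqrt(1 + s**2), s - np.sqrt(1 + s**2)
    p_plus = -z_minus / (z_plus - z_minus)
    x = mu + sigma * np.array([z_plus, z_minus])
    p = np.array([p_plus, 1.0 - p_plus])
    return x, p

x, p = two_point_concentrations(mu=2.0, sigma=0.5, skew=0.8)
for k in (1, 2, 3):   # standardized central moments: 0, 1, 0.8
    print(k, np.sum(p * ((x - 2.0) / 0.5) ** k))
```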

18.
Parameter and Quantile Estimation for the Generalized Pareto Distribution
The generalized Pareto distribution is a two-parameter distribution that contains the uniform, exponential, and Pareto distributions as special cases. It has applications in a number of fields, including reliability studies and the analysis of environmental extreme events. Maximum likelihood estimation of the generalized Pareto distribution has previously been considered in the literature, but we show, using computer simulation, that unless the sample size is 500 or more, estimators derived by the method of moments or the method of probability-weighted moments are more reliable. We also use computer simulation to assess the accuracy of confidence intervals for the parameters and quantiles of the generalized Pareto distribution.
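A sketch of the classical probability-weighted-moment estimators for the GPD, written in scipy's genpareto parametrization F(x) = 1 − (1 + cx/σ)^(−1/c) with threshold 0; the closed forms use a0 = E[X] and a1 = E[X(1 − F(X))] and assume c < 1.

```python
import numpy as np
from scipy.stats import genpareto

def gpd_pwm(x):
    """PWM estimates (c, sigma) for the generalized Pareto distribution:
    c = 2 - a0 / (a0 - 2 a1),  sigma = a0 (1 - c)."""
    x = np.sort(x)
    n = len(x)
    a0 = x.mean()
    a1 = np.mean(x * (n - 1 - np.arange(n)) / (n - 1))   # unbiased a1
    c = 2.0 - a0 / (a0 - 2.0 * a1)
    sigma = a0 * (1.0 - c)
    return c, sigma

rng = np.random.default_rng(9)
sample = genpareto.rvs(0.2, scale=1.0, size=100, random_state=rng)
print(gpd_pwm(sample))   # should be near (0.2, 1.0)
```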

19.
A method is presented to estimate the process capability index (PCI) for a set of non-normal data from its first four moments. It is assumed that these four moments, i.e. mean, standard deviation, skewness, and kurtosis, are suitable to approximately characterize the data distribution properties. The probability density function of the non-normal data is expressed in Chebyshev-Hermite polynomials up to tenth order from the first four moments. An effective range, defined as the value for which a pre-determined percentage of data falls within the range, is solved numerically from the derived cumulative distribution function. The PCI with a specified limit is hence obtained from the effective range. Compared with some other existing methods, the present method gives a more accurate PCI estimation and shows less sensitivity to sample size. A simple algebraic equation for the effective range, derived from least-squares fitting to the numerically solved results, is also proposed for PCI estimation. Copyright © 2004 John Wiley & Sons, Ltd.
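A sketch of the effective-range idea using a four-moment Gram-Charlier expansion; the paper carries the Chebyshev-Hermite expansion to tenth order, so this low-order version only illustrates the mechanics, and the inputs are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def gc_cdf(z, skew, exkurt):
    """Four-moment Gram-Charlier CDF of the standardized variable:
    Phi(z) - phi(z) * (skew/6 * He2(z) + exkurt/24 * He3(z))."""
    he2, he3 = z**2 - 1.0, z**3 - 3.0 * z
    return norm.cdf(z) - norm.pdf(z) * (skew / 6 * he2 + exkurt / 24 * he3)

def capability(sigma, skew, exkurt, lsl, usl, coverage=0.9973):
    """Cp-like index: tolerance width over the effective range holding
    `coverage` of the fitted distribution."""
    lo = brentq(lambda z: gc_cdf(z, skew, exkurt) - (1 - coverage) / 2, -8, 0)
    hi = brentq(lambda z: gc_cdf(z, skew, exkurt) - (1 + coverage) / 2, 0, 8)
    return (usl - lsl) / ((hi - lo) * sigma)

print(capability(sigma=0.1, skew=0.5, exkurt=0.5, lsl=9.6, usl=10.4))
```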

20.
The burn-in process is a part of the production process whereby manufactured products are operated for a short period of time before release. In this paper, a Bayesian method is developed for calculating the optimal burn-in duration for a batch of products whose life distribution is modeled as a mixture of two (denoted 'strong' and 'weak') exponential sub-populations. The criterion used is the minimization of a total expected cost function reflecting costs related to the burn-in process and to product failures throughout a warranty period. The expectation is taken with respect to the mixed exponential failure model and its parameters. The prior distribution for the parameters is constructed using a beta density for the mixture parameter and independent gamma densities for the failure rate parameters of the sub-populations. It is assumed that the optimal burn-in time is selected in advance and remains fixed throughout the burn-in process. When additional failure information is available prior to the burn-in process, the minimization of the posterior total cost is used as the criterion for selecting the optimal burn-in time. Expressions for the joint posterior distribution and cost are provided for both complete and truncated data. The method is illustrated with an example.
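A sketch of the deterministic core of such an optimization: expected cost versus burn-in time for a known mixed-exponential population. The Bayesian averaging over parameters and the paper's exact cost structure are not reproduced; all costs and rates are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def expected_cost(b, p=0.1, lam_w=2.0, lam_s=0.05, w=1.0,
                  c_burn=0.2, c_fail=10.0):
    """Illustrative total cost of burn-in time b for a population with a
    'weak' fraction p (rate lam_w) and 'strong' rest (rate lam_s): burn-in
    operating cost plus expected warranty-failure cost over warranty w."""
    surv = lambda t: p * np.exp(-lam_w * t) + (1 - p) * np.exp(-lam_s * t)
    p_warranty_fail = (surv(b) - surv(b + w)) / surv(b)
    return c_burn * b + c_fail * p_warranty_fail

res = minimize_scalar(expected_cost, bounds=(0.0, 5.0), method="bounded")
print("optimal burn-in time:", res.x, " expected cost:", res.fun)
```

Burning in weeds out most weak units early, so the warranty-failure term drops quickly at first and the linear burn-in cost eventually dominates, giving an interior optimum.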
