Similar documents
A total of 20 similar documents were found.
1.
A Bayesian approach is used to estimate and to find highest posterior density intervals for R(t) = P(X1 > t, X2 > t) when (X1, X2) follow the Gumbel bivariate exponential distribution. Because of the complexity of the likelihood function, numerical integration must be used and, for this setting, Jacobi and Laguerre rules are employed as they arise naturally. A data set from an application is used to illustrate the procedures.
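The Laguerre rule mentioned here integrates against an exp(−x) weight on [0, ∞), the natural form for posterior expectations under exponential models. A minimal illustrative sketch (the integrand is a stand-in, not the paper's actual posterior):

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

def laguerre_integral(f, n=32):
    """Approximate the integral of f(x) * exp(-x) over [0, inf)
    with n-point Gauss-Laguerre quadrature."""
    nodes, weights = laggauss(n)
    return float(np.sum(weights * f(nodes)))

# Sanity check: the mean of a standard exponential density exp(-x) is 1.
mean = laguerre_integral(lambda x: x)
```

The rule is exact for polynomial integrands up to degree 2n − 1, so low-order posterior moments of exponential-family integrands are recovered essentially exactly.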

2.
A finite mixture of gamma distributions [Finite mixture of certain distributions. Comm. Statist. Theory Methods 31(12), 2123-2137] is used as a conjugate prior, which gives a convenient form of posterior distribution. This class of conjugate priors is more flexible than the class of gamma prior distributions. The usefulness of a mixture gamma-type prior and the posterior of the uncertain parameter λ of the Poisson distribution are illustrated using a Markov chain Monte Carlo (MCMC) Gibbs sampling approach on hierarchical models. Using the generalized hypergeometric function, a method to approximate maximum likelihood estimators for the parameters of the Agarwal and Al-Saleh [Generalized gamma type distribution and its hazard rate function. Comm. Statist. Theory Methods 30(2), 309-318] generalized gamma-type distribution is also suggested.
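The conjugacy the abstract relies on can be sketched directly: under a Poisson likelihood, each Gamma(a_k, b_k) component of the mixture prior updates to Gamma(a_k + Σxᵢ, b_k + n), and the mixture weights are reweighted by each component's marginal likelihood. A hypothetical sketch (function and parameter names are illustrative, not from the paper):

```python
import math

def gamma_mixture_posterior(x, weights, shapes, rates):
    """Posterior of a Poisson rate lambda under a finite gamma-mixture
    prior sum_k w_k * Gamma(a_k, b_k).  Each component updates
    conjugately to Gamma(a_k + sum(x), b_k + n); the weights are
    reweighted by the component marginal likelihoods."""
    n, s = len(x), sum(x)
    log_w, post = [], []
    for w, a, b in zip(weights, shapes, rates):
        a_post, b_post = a + s, b + n
        # log marginal likelihood of this component (shared constants dropped)
        lm = (math.log(w) + a * math.log(b) - math.lgamma(a)
              + math.lgamma(a_post) - a_post * math.log(b_post))
        log_w.append(lm)
        post.append((a_post, b_post))
    m = max(log_w)
    w_new = [math.exp(l - m) for l in log_w]
    total = sum(w_new)
    return [(w / total, a, b) for w, (a, b) in zip(w_new, post)]
```

With a single component this reduces to the textbook gamma–Poisson update, which is a quick correctness check.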

3.
In this paper the application of image prior combinations to the Bayesian Super Resolution (SR) image registration and reconstruction problem is studied. Two sparse image priors, a Total Variation (TV) prior and a prior based on the ℓ1 norm of horizontal and vertical first-order differences (f.o.d.), are combined with a non-sparse Simultaneous Auto Regressive (SAR) prior. Since, for a given observation model, each prior produces a different posterior distribution of the underlying High Resolution (HR) image, the use of variational approximation will produce as many posterior approximations as priors we want to combine. A unique approximation is obtained here by finding the distribution on the HR image given the observations that minimizes a linear convex combination of Kullback–Leibler (KL) divergences. We find this distribution in closed form. The estimated HR images are compared with the ones obtained by other SR reconstruction methods.
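For intuition, the minimizer of a convex combination Σᵢ λᵢ KL(q‖pᵢ) over q is the normalized geometric mean q ∝ Πᵢ pᵢ^λᵢ. A toy one-dimensional Gaussian analogue (not the paper's image model) shows the closed form: precisions and precision-weighted means combine linearly.

```python
def combine_gaussians_kl(params, lambdas):
    """Minimize sum_i lambda_i * KL(q || p_i) over Gaussian q, with
    p_i = N(m_i, v_i) and sum(lambdas) == 1.  The minimizer is the
    normalized geometric mean: precision(q) = sum_i lambda_i / v_i and
    mean(q) = precision-weighted average of the m_i."""
    prec = sum(l / v for l, (m, v) in zip(lambdas, params))
    mean = sum(l * m / v for l, (m, v) in zip(lambdas, params)) / prec
    return mean, 1.0 / prec

# Two unit-variance Gaussians at 0 and 2, equal weights -> N(1, 1).
m, v = combine_gaussians_kl([(0.0, 1.0), (2.0, 1.0)], [0.5, 0.5])
```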

4.
This paper studies a heavy-tailed stochastic volatility (SV) model with leverage effect, where a bivariate Student-t distribution is used to model the error innovations of the return and volatility equations. Choy et al. (2008) studied this model by expressing the bivariate Student-t distribution as a scale mixture of bivariate normal distributions. We propose an alternative formulation by first deriving a conditional Student-t distribution for the return and a marginal Student-t distribution for the log-volatility and then expressing these two Student-t distributions as scale mixtures of normal (SMN) distributions. Our approach separates the sources of outliers and allows for distinguishing between outliers generated by the return process and those generated by the volatility process, and hence is an improvement over the approach of Choy et al. (2008). In addition, it allows an efficient model implementation using the WinBUGS software. A simulation study is conducted to assess the performance of the proposed approach and to compare it with the approach of Choy et al. (2008). In the empirical study, daily exchange rate returns of the Australian dollar to various currencies and daily stock market index returns of various international stock markets are analysed. Model comparison relies on the Deviance Information Criterion, and convergence is monitored by Geweke’s diagnostic test.
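The SMN representation used above can be sketched in the univariate case: a Student-t(ν) variate is a standard normal divided by the square root of an independent Gamma(ν/2, rate ν/2) mixing variable. A minimal sketch, not the paper's bivariate construction:

```python
import numpy as np

def student_t_smn(nu, size, rng=None):
    """Draw Student-t(nu) variates via the scale-mixture-of-normals
    representation: X = Z / sqrt(W), with Z ~ N(0, 1) and
    W ~ Gamma(shape=nu/2, rate=nu/2) independent of Z."""
    rng = np.random.default_rng(rng)
    z = rng.standard_normal(size)
    # numpy's gamma takes a scale parameter, so scale = 2/nu gives rate nu/2
    w = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=size)
    return z / np.sqrt(w)
```

Conditioning on W gives a normal with inflated variance 1/W, which is what lets Gibbs-type samplers (e.g. in WinBUGS) treat the heavy-tailed model as a conditionally Gaussian one.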

5.
This study deals with the classical and Bayesian estimation of the parameters of a k-components load-sharing parallel system model in which each component's lifetime follows the Lindley distribution. Initially, the failure rate of each of the k components in the system is h(t,θ1) until the first component failure. However, upon the first failure within the system, the failure rates of the remaining (k − 1) surviving components change to h(t,θ2) and remain the same until the next failure. After the second failure, the failure rates of the (k − 2) surviving components change to h(t,θ3), and finally, when the (k − 1)th component fails, the failure rate of the last surviving component becomes h(t,θk). In the classical setup, the maximum likelihood estimates of the load share parameters, system reliability and hazard rate functions, along with their standard errors, are computed. 100 × (1 − γ)% confidence intervals and two bootstrap confidence intervals for the parameters have also been constructed. Further, by assuming Jeffrey's invariant and gamma priors on the unknown parameters, Bayes estimates along with their posterior standard errors and highest posterior density credible intervals of the parameters are obtained. A Markov chain Monte Carlo technique, the Metropolis–Hastings algorithm, has been utilized to generate draws from the posterior densities of the parameters.
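For reference, the Lindley hazard rate underlying the load-share model has the closed form h(t, θ) = θ²(1 + t)/(θ + 1 + θt); a one-line sketch (the load-sharing machinery itself is omitted):

```python
def lindley_hazard(t, theta):
    """Hazard rate of the Lindley(theta) distribution:
    h(t) = theta^2 * (1 + t) / (theta + 1 + theta * t).
    It starts at theta^2/(theta + 1) and increases in t."""
    return theta ** 2 * (1.0 + t) / (theta + 1.0 + theta * t)
```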

6.
The usual arithmetic operations on real numbers can be extended to arithmetical operations on fuzzy intervals by means of Zadeh’s extension principle based on a t-norm T. A t-norm is called consistent with respect to a class of fuzzy intervals for some arithmetic operation, if this arithmetic operation is closed for this class. It is important to know which t-norms are consistent with particular types of fuzzy intervals. Recently, Dombi and Győrbíró [J. Dombi, N. Győrbíró, Additions of sigmoid-shaped fuzzy intervals using the Dombi operator and infinite sum theorems, Fuzzy Sets and Systems 157 (2006) 952-963] proved that addition is closed if the Dombi t-norm is used with sigmoid-shaped fuzzy intervals. In this paper, we define a broader class of sigmoid-shaped fuzzy intervals. Then, we study t-norms that are consistent with these particular types of fuzzy intervals. Dombi and Győrbíró’s results are special cases of the results described in this paper.
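For concreteness, the Dombi t-norm with parameter λ > 0 is T(x, y) = 1/(1 + ((1/x − 1)^λ + (1/y − 1)^λ)^{1/λ}); a small sketch (the explicit boundary handling is a simplifying assumption):

```python
def dombi_tnorm(x, y, lam=1.0):
    """Dombi t-norm with parameter lam > 0 on [0, 1].
    Boundary cases: T(0, y) = T(x, 0) = 0, and T(x, 1) = x."""
    if x == 0.0 or y == 0.0:
        return 0.0
    term = ((1.0 / x - 1.0) ** lam + (1.0 / y - 1.0) ** lam) ** (1.0 / lam)
    return 1.0 / (1.0 + term)
```

A quick check of the t-norm axioms: 1 acts as the neutral element, and for λ = 1 the formula reduces to the Hamacher product xy/(x + y − xy).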

7.
We study a new warranty policy for non-repairable products which is indexed by two correlated random variables, age and usage, and covers all failures in (0, t]. Two different warranty costs for the replacement of the failed product are considered, according to its usage being greater or less than a pre-specified level s > 0. A bivariate probability distribution function is applied to incorporate the correlation effect of the two variables. Analytical expressions of the probability density function of the total warranty cost and its expected value, the probability distribution functions of the number of the failed products with usage greater or less than s, and their corresponding expected values and costs are derived. Limit results are also obtained. The results obtained are useful measures in establishing the compensation policy and the evaluation of its performance under the proposed warranty. Illustrative numerical examples of the expected cost for the Paulson, Pareto and Beta Stacy bivariate distributions are presented and discussed. In particular, for Paulson’s bivariate probability distribution, closed form expressions for the expected costs are obtained.

8.
Data sets in numerous areas of application can be modelled by symmetric bivariate nonnormal distributions. Estimation of parameters in such situations is considered when the mean and variance of one variable are, respectively, a linear and a positive function of the other variable. This is typically true of the bivariate t distribution. The resulting estimators are found to be remarkably efficient. Hypothesis testing procedures are developed and shown to be robust and powerful. Real life examples are given.

9.
Confidence intervals for the population variance and for the difference in variances of two populations, based on the ordinary t-statistics combined with the bootstrap method, are suggested. Theoretical and practical aspects of the suggested techniques are presented, as well as their comparison with existing methods (methods based on Chi-square and F-statistics). In addition, an application of the presented methods to an insurance property data set is described and analyzed. For data from an exponential distribution, the confidence intervals calculated using the described methods (based on a transformation of the t-statistics and the bootstrap technique) give the most consistent coverage in comparison with the other methods.
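A simplified stand-in for this interval construction is a plain percentile bootstrap for the variance (the t-statistic transformation discussed in the abstract is omitted here):

```python
import random
import statistics

def bootstrap_var_ci(data, level=0.95, n_boot=2000, seed=0):
    """Percentile bootstrap confidence interval for the population
    variance: resample with replacement, compute the variance of each
    resample, and take the central `level` quantile range."""
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(
        statistics.pvariance([rng.choice(data) for _ in range(n)])
        for _ in range(n_boot)
    )
    alpha = (1.0 - level) / 2.0
    return stats[int(alpha * n_boot)], stats[int((1.0 - alpha) * n_boot) - 1]
```

The percentile interval is the crudest of the bootstrap variants; the abstract's point is precisely that transforming the statistic before bootstrapping improves coverage for skewed (e.g. exponential) data.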

11.
Different cost models of allocation and rearrangement of a set {F1, F2, …, Fn} of files are investigated in the paper. It is assumed that the distribution p1(t) of the reference string ξ(t) depends on the user's activity. For large values of n a limiting process is used and a continuous rearrangement model is introduced, in which the integral formulas can be handled more simply than the summation formulas of the discrete case. Some open problems could be solved with the help of this treatment. The connection with an order-statistical treatment is found and used for a simple user-activity model. The formulas can be used to approximate the average head movement, for different distributions, in optimal deterministic file allocation problems. Deterministic and stochastic strategies of allocation and rearrangement are studied and compared.

12.
In this paper, we derive recurrence relations for cumulative distribution functions (cdf’s) of bivariate t and extended skew-t distributions. These recurrence relations are over ν (the degrees of freedom), and starting from the known results for ν=1 and ν=2, they will allow for the recursive evaluation of the distribution function for any other positive integral value of ν. Then, we consider a linear combination of order statistics from a bivariate t distribution with an arbitrary mean vector and show that its cdf is a mixture of cdf’s of the extended skew-t distributions. This mixture form, along with the explicit expressions of the cdf’s of the extended skew-t distributions, enables us to derive explicit expressions for the cdf of the linear combination for any positive integral value of ν.

13.
An approach, based on recent work by Stern [56], is described for obtaining the approximate transient behavior of both the M/M/1 and M(t)/M/1 queues, where the notation M(t) indicates an exponential arrival process with time-varying parameter λ(t). The basic technique employs an M/M/1/K approximation to the M/M/1 queue to obtain a spectral representation of the time-dependent behavior for which the eigenvalues and eigenvectors are real. Following a general survey of transient analysis which has already been accomplished, Stern's M/M/1/K approximation technique is examined to determine how best to select a value for K which will yield both accurate and computationally efficient results. It is then shown how the approximation technique can be extended to analyze the M(t)/M/1 queue, where we assume that the M(t) arrival process can be approximated by a discretely time-varying Poisson process. An approximate expression for the departure process of the M/M/1 queue is also proposed, which implies that, for an M(t)/M/1 queue whose arrival process is discretely time-varying, the departure process can be approximated as discretely time-varying too (albeit with a different time-varying parameter). In all cases, the techniques and approximations are examined by comparison with exact analytic results, simulation or alternative discrete-time approaches.
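The finite-state M/M/1/K approximation has a birth–death generator whose transient distribution is easy to compute directly; a sketch using uniformization (a generic technique for continuous-time Markov chains, not necessarily Stern's spectral method):

```python
import math
import numpy as np

def mm1k_transient(lam, mu, K, p0, t, terms=150):
    """Transient state distribution of an M/M/1/K queue at time t via
    uniformization: p(t) = sum_n Pois(n; q*t) * p0 @ P^n, where
    P = I + Q/q and q = lam + mu bounds every |Q_ii|."""
    Q = np.zeros((K + 1, K + 1))
    for i in range(K + 1):
        if i < K:
            Q[i, i + 1] = lam   # arrival
        if i > 0:
            Q[i, i - 1] = mu    # service completion
        Q[i, i] = -Q[i].sum()
    q = lam + mu
    P = np.eye(K + 1) + Q / q
    v = np.asarray(p0, dtype=float)
    out = np.zeros_like(v)
    log_w = -q * t              # log Poisson weight for n = 0
    for n in range(terms):
        out += math.exp(log_w) * v
        v = v @ P
        log_w += math.log(q * t) - math.log(n + 1)
    return out
```

For large t the result approaches the well-known truncated-geometric stationary distribution with ratio ρ = λ/μ, which gives a convenient correctness check.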

14.
Two-sample experiments (paired or unpaired) are often used to analyze treatment effects in life and environmental sciences. Quantifying an effect can be achieved by estimating the difference in center of location between a treated and a control sample. In unpaired experiments, a shift in scale is also of interest. Non-normal data distributions can thereby pose a serious challenge for obtaining accurate confidence intervals for treatment effects. To study the effects of non-normality we analyzed robust and non-robust measures of treatment effects: differences of averages, medians, standard deviations, and normalized median absolute deviations in case of unpaired experiments, and average of differences and median of differences in case of paired experiments. A Monte Carlo study using bivariate lognormal distributions was carried out to evaluate coverage performance and lengths of four types of nonparametric bootstrap confidence intervals, namely normal, Student's t, percentile, and BCa, for the estimated measures. The robust measures produced smaller coverage errors than their non-robust counterparts. On the other hand, the robust versions gave average confidence interval lengths approximately 1.5 times larger. In unpaired experiments, BCa confidence intervals performed best, while in paired experiments, Student's t intervals were as good as BCa intervals. Monte Carlo results are discussed and recommendations on data sizes are presented. In an application to physiological source–sink manipulation experiments with sunflower, we quantify the effect of an increased or decreased source–sink ratio on the percentage of unfilled grains and the dry mass of a grain. In an application to laboratory experiments with wastewater, we quantify the disinfection effect of predatory microorganisms. The presented bootstrap method to compare two samples is broadly applicable to measured or modeled data from the entire range of environmental research and beyond.

15.
Bivariate distributions are useful for the simultaneous modeling of two random variables. Bivariate families of distributions have not been widely explored, and in this article a new family of bivariate distributions is proposed. The new family extends the univariate transmuted family of distributions and is helpful in modeling complex joint phenomena. Statistical properties of the new family are explored, including the marginal and conditional distributions, conditional moments, product and ratio moments, and the bivariate reliability and bivariate hazard rate functions. Maximum likelihood estimation (MLE) of the family's parameters is also carried out. The proposed bivariate family is studied for Weibull baseline distributions, giving rise to the bivariate transmuted Weibull (BTW) distribution, which is explored in detail: its marginal and conditional distributions, product, ratio and conditional moments, and hazard rate function are obtained, and parameter estimation is carried out. Finally, a real data application of the BTW distribution is given, for which the proposed distribution provides a suitable fit.
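For reference, the univariate transmuted construction that the family extends maps a baseline CDF G to F = (1 + λ)G − λG² with |λ| ≤ 1; a sketch with a Weibull baseline (the shape/scale parameterization is an assumption, not taken from the paper):

```python
import math

def transmuted_weibull_cdf(x, shape, scale, lam):
    """CDF of the univariate transmuted Weibull distribution:
    F(x) = (1 + lam) * G(x) - lam * G(x)^2, with |lam| <= 1 and
    G the Weibull(shape, scale) CDF.  lam = 0 recovers the baseline."""
    g = 1.0 - math.exp(-((x / scale) ** shape))
    return (1.0 + lam) * g - lam * g * g
```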

16.
Motivated by the stochastic representation of the univariate zero-inflated Poisson (ZIP) random variable, the authors propose a multivariate ZIP distribution, called the Type I multivariate ZIP distribution, to model correlated multivariate count data with extra zeros. The distributional theory and associated properties are developed. Maximum likelihood estimates for parameters of interest are obtained by Fisher’s scoring algorithm and the expectation–maximization (EM) algorithm, respectively. Asymptotic and bootstrap confidence intervals of parameters are provided. The likelihood ratio test and score test are derived and are compared via simulation studies. Bayesian methods are also presented for the case where prior information on parameters is available. Two real data sets are used to illustrate the proposed methods. Under both AIC and BIC, our analysis of the two data sets supports the Type I multivariate zero-inflated Poisson model as a much less complex and feasible alternative to the existing multivariate ZIP models proposed by Li et al. (Technometrics, 29–38, Vol 41, 1999).
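The univariate ZIP building block has pmf P(X = 0) = φ + (1 − φ)e^{−λ} and P(X = k) = (1 − φ)e^{−λ}λᵏ/k! for k ≥ 1, where φ is the extra-zero probability; a minimal sketch:

```python
import math

def zip_pmf(k, phi, lam):
    """PMF of the univariate zero-inflated Poisson: with probability
    phi the count is a structural zero, otherwise it is Poisson(lam)."""
    pois = math.exp(-lam) * lam ** k / math.factorial(k)
    return phi + (1.0 - phi) * pois if k == 0 else (1.0 - phi) * pois
```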

17.
We present a probabilistic model for robust factor analysis and principal component analysis in which the observation noise is modeled by Student-t distributions in order to reduce the negative effect of outliers. The Student-t distributions are modeled independently for each data dimension, which differs from previous works using multivariate Student-t distributions. We compare methods using the proposed noise distribution, the multivariate Student-t, and the Laplace distribution. The intractability of evaluating the posterior probability density is resolved by using variational Bayesian approximation methods. We demonstrate that the assumed noise model can yield accurate reconstructions because corrupted elements of a bad-quality sample can be reconstructed using the other elements of the same data vector. Experiments on an artificial dataset and a weather dataset show that the dimensional independence and the flexibility of the proposed Student-t noise model can make it superior in some applications.

18.
In this paper, we present new multivariate quantile distributions and utilise likelihood-free Bayesian algorithms for inferring the parameters. In particular, we apply a sequential Monte Carlo (SMC) algorithm that is adaptive in nature and requires very little tuning compared with other approximate Bayesian computation algorithms. Furthermore, we present a framework for the development of multivariate quantile distributions based on a copula. We consider bivariate and time series extensions of the g-and-k distribution under this framework, and develop an efficient component-wise updating scheme free of likelihood functions to be used within the SMC algorithm. In addition, we trial the set of octiles as summary statistics as well as functions of these that form robust measures of location, scale, skewness and kurtosis. We show that these modifications lead to reasonably precise inferences that are more closely comparable to computationally intensive likelihood-based inference. We apply the quantile distributions and algorithms to simulated data and an example involving daily exchange rate returns.
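For reference, the univariate g-and-k distribution is defined through its quantile function rather than a density, which is why likelihood-free inference is natural for it; a sketch using the standard parameterization (c = 0.8 by convention):

```python
import math
from statistics import NormalDist

def gk_quantile(u, a, b, g, k, c=0.8):
    """Quantile function of the univariate g-and-k distribution:
    Q(u) = a + b * (1 + c*tanh(g*z/2)) * (1 + z^2)^k * z,
    with z = Phi^{-1}(u).  g controls skewness, k controls kurtosis;
    g = k = 0 recovers the N(a, b^2) quantile function."""
    z = NormalDist().inv_cdf(u)
    return a + b * (1.0 + c * math.tanh(g * z / 2.0)) * (1.0 + z * z) ** k * z
```

Sampling is immediate (plug in uniform draws), so simulated summary statistics such as octiles, as used in the SMC algorithm above, are cheap to produce.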

19.
Objective: Classical clustering algorithms suffer from problems such as the curse of dimensionality when handling high-dimensional data, which greatly increases computational cost and degrades performance. Clustering networks built on autoencoders or variational autoencoders improve clustering results, but the features extracted by autoencoders are often poor, and variational autoencoders suffer from problems such as posterior collapse, which harms the clustering results. This paper therefore proposes a clustering network based on a Gaussian-mixture variational autoencoder. Method: A variational autoencoder is constructed with a Gaussian mixture as the prior distribution of the latent variables, and the autoencoder network is trained with an objective function composed of the reconstruction error and the Kullback-Leibler (KL) divergence between the prior and posterior distributions of the latent variables. The trained encoder extracts features from the input data and, combined with a clustering layer, forms the clustering network, which is trained with an objective function built from the KL divergence between the soft-assignment distribution of the encoder's latent features and an auxiliary target distribution of the soft-assignment probabilities. The variational autoencoder is implemented with a convolutional neural network. Results: To verify the effectiveness of the proposed algorithm, the network's performance was evaluated on the benchmark datasets MNIST (Modified National Institute of Standards and Technology Database) and Fashion-MNIST, using clustering accuracy (ACC) and normalized mutua...
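The soft-assignment and auxiliary target distributions described in the Method section resemble those of deep embedded clustering; a hypothetical numpy sketch (the Student-t assignment kernel and all names are assumptions, not taken from the paper):

```python
import numpy as np

def soft_assign(z, centers, alpha=1.0):
    """Student-t soft assignment q_ij of embedded points z (N, D)
    to cluster centers (C, D); rows sum to 1."""
    d2 = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    """Auxiliary target p_ij = (q_ij^2 / f_j) normalized per sample,
    where f_j = sum_i q_ij; it sharpens confident assignments.  The
    clustering loss is then KL(P || Q)."""
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)
```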

20.
Abrupt shifts in the level of a time series represent important information and should be preserved in statistical signal extraction. Various rules for detecting level shifts that are resistant to outliers and which work with only a short time delay are investigated. The properties of robustified versions of the t-test for two independent samples and its non-parametric alternatives are elaborated under different types of noise. Trimmed t-tests, median comparisons, robustified rank and ANOVA tests based on robust scale estimators are compared.
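One of the robust building blocks compared above is the trimmed mean used by trimmed t-tests; a minimal sketch of symmetric trimming (the trimming proportion is illustrative):

```python
import statistics

def trimmed_mean(x, prop=0.1):
    """Symmetrically trimmed mean: sort the sample and drop the
    fraction `prop` of observations from each tail before averaging,
    which bounds the influence of outliers."""
    xs = sorted(x)
    k = int(prop * len(xs))
    return statistics.fmean(xs[k:len(xs) - k])
```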


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号