Similar Documents (20 results)
1.
2.
A new efficient method is proposed to compute multivariate normal probabilities over rectangles in high dimensions. The method exploits four variance reduction techniques: conditional Monte Carlo, importance sampling, splitting, and control variates. Simulation results evaluating the performance of the proposed method are presented. The method is designed for computing small exceedance probabilities.
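As a toy illustration of one of the variance-reduction ideas named above, the sketch below estimates a small bivariate-normal exceedance probability by importance sampling with a mean shift into the rare corner. The dimension, threshold `c`, and correlation `rho` are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
c, rho, n = 3.0, 0.5, 100_000  # illustrative threshold, correlation, sample size

# Covariance and its Cholesky factor for sampling.
cov = np.array([[1.0, rho], [rho, 1.0]])
L = np.linalg.cholesky(cov)

# Importance density: same covariance, mean shifted to the corner (c, c),
# so most draws land in the rare region {X1 > c, X2 > c}.
mu = np.array([c, c])
x = rng.standard_normal((n, 2)) @ L.T + mu

# Log density ratio (target mean 0 vs. proposal mean mu, same covariance).
P = np.linalg.inv(cov)
logw = (-0.5 * np.einsum('ij,jk,ik->i', x, P, x)
        + 0.5 * np.einsum('ij,jk,ik->i', x - mu, P, x - mu))
w = np.exp(logw)

inside = (x > c).all(axis=1)          # indicator of the rectangle [c, inf)^2
p_hat = np.mean(w * inside)           # importance-sampling estimate
print(p_hat)
```

Because almost every proposal draw hits the rare region, the weighted estimator has far lower variance here than plain Monte Carlo, which would see only a handful of hits in 100,000 draws.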

3.
The robustness of estimates based on the minimum integrated square error criterion can be exploited to set up a procedure for finding the number of components of a mixture of Gaussian distributions. Each step of the procedure compares the maximum likelihood and minimum integrated square error estimates of a mixture with a fixed number of components. The discrepancy between the two estimated densities is measured by the similarity between densities that follows from the Cauchy-Schwarz inequality. A Monte Carlo significance test is introduced to verify the similarity of the two estimates; if similarity is rejected, the model is changed simply by adding one more component to the mixture. Numerical examples are given, and the main results of a simulation study, carried out to check the power of the procedure under several experimental scenarios, are provided.
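The Cauchy-Schwarz similarity mentioned above has a closed form when both densities are Gaussian, which makes a minimal sketch easy; the specific means and variances below are illustrative, not from the paper.

```python
import math

def gauss_overlap(m1, v1, m2, v2):
    """Closed-form integral of N(x; m1, v1) * N(x; m2, v2) over the real line."""
    v = v1 + v2
    return math.exp(-0.5 * (m1 - m2) ** 2 / v) / math.sqrt(2 * math.pi * v)

def cs_similarity(m1, v1, m2, v2):
    """Cauchy-Schwarz similarity: int(fg) / sqrt(int(f^2) int(g^2)).
    Equals 1 iff the two densities coincide (a.e.), < 1 otherwise."""
    num = gauss_overlap(m1, v1, m2, v2)
    den = math.sqrt(gauss_overlap(m1, v1, m1, v1) * gauss_overlap(m2, v2, m2, v2))
    return num / den

print(cs_similarity(0.0, 1.0, 0.0, 1.0))  # identical densities: similarity is 1
print(cs_similarity(0.0, 1.0, 3.0, 1.0))  # well-separated densities: well below 1
```

For mixture densities the same ratio is used, with the pairwise Gaussian overlaps summed over all component pairs.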

4.
A fast algorithm for calculating the simplicial depth of a single parameter vector of a polynomial regression model is derived. Additionally, an algorithm is presented for calculating the parameter vectors with maximum simplicial depth within an affine subspace of the parameter space or a polyhedron. Since the maximum simplicial depth estimator is not unique, l1 and l2 methods are used to make it unique. This estimator is compared with other estimators in examples of linear and quadratic regression. Furthermore, it is shown how the maximum simplicial depth can be used to derive distribution-free asymptotic α-level tests for hypotheses in polynomial regression models. The tests are applied to a problem of shape analysis, testing how the relative head length of the fish species Lepomis gibbosus depends on fish size, and whether the dependency can be described by the same polynomial regression function within different populations.
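For intuition about the depth notion used above, here is a brute-force definition of simplicial depth in the plane: the fraction of data triangles that contain a given point. This is the naive O(n^3) definition, not the fast algorithm of the paper; the data points are illustrative.

```python
import itertools
import numpy as np

def simplicial_depth_2d(theta, points):
    """Simplicial depth of theta in R^2: the fraction of (closed) triangles
    formed by triples of data points that contain theta. Brute force O(n^3)."""
    theta = np.asarray(theta, float)
    pts = np.asarray(points, float)

    def orient(a, b, c):
        # Sign of the signed area of triangle (a, b, c).
        return np.sign((b[0] - a[0]) * (c[1] - a[1])
                       - (b[1] - a[1]) * (c[0] - a[0]))

    count = total = 0
    for a, b, c in itertools.combinations(pts, 3):
        signs = [orient(a, b, theta), orient(b, c, theta), orient(c, a, theta)]
        nz = [s for s in signs if s != 0]
        # theta lies in the closed triangle iff all nonzero orientations agree.
        if not nz or all(s == nz[0] for s in nz):
            count += 1
        total += 1
    return count / total

pts = np.array([[0, 0], [4, 0], [0, 4], [4, 4], [2, 1]])
print(simplicial_depth_2d([2.0, 1.5], pts))
```

The maximum simplicial depth estimator discussed in the abstract is the parameter vector maximizing this quantity, computed there by a fast algorithm rather than enumeration.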

5.
The problem of providing efficient and reliable robust regression algorithms is considered. The impact of global optimization methods, such as stopping conditions and clustering techniques, on the calculation of robust regression estimators is investigated. Stopping conditions permit new algorithms that perform as well as existing algorithms in less time and with adaptive algorithm parameters. Clustering global optimization is shown to be a general framework encompassing many of the existing algorithms.

6.
7.
A parametric model based on counting process theory and aimed at the analysis of recurrent events is explored. The model is built in the context of the reliability of repairable systems and is used to analyze failures of water distribution pipes. It accounts for system aging, for the harmful effects of events on the state of the system, and for covariates, both fixed and time-varying. The parameters assessing aging and the effects of fixed covariates are widely explored in the literature on recurrent-event modeling and are treated as typical parameters, whereas the parameters assessing the harmful effects of events and the effects of time-dependent covariates are considered original and model-specific. The general usability of the model is assessed empirically in terms of the normality and unbiasedness of the maximum likelihood estimator (MLE) of the model parameters, and the results of a Monte Carlo study for the MLE are presented. The asymptotic behavior of the MLE is explored along two asymptotic directions, the number of individuals under observation and the duration of observation; other scales combining these two directions are also explored. The empirically established asymptotic properties of the MLE are partially consistent with theoretical results reported in the literature for the typical model parameters, while the model-specific parameters show distinct asymptotic trends. The empirical results suggest that the number of observed events alone can govern the asymptotic behavior of the typical parameters; the model-specific parameters may additionally depend on other criteria.

8.
For a linear multilevel model with two levels, equal numbers of level-1 units per level-2 unit, and a random intercept only, different empirical Bayes estimators of the random intercept are examined: the classical empirical Bayes estimator, the Morris version of the empirical Bayes estimator, and Rao's estimator. It is unclear which of these performs best in terms of Bayes risk. Of the three, the Rao estimator is optimal when the covariance matrix of random coefficients may be negative definite; in the multilevel model, however, this matrix is restricted to be positive semi-definite. The Morris version replaces the weights of the empirical Bayes estimator by unbiased estimates, but this correction assumes known level-1 variances, which in many empirical settings are unknown. A fourth estimator is proposed: a variant of Rao's estimator that restricts the estimated covariance matrix of random coefficients to be positive semi-definite. Since there are no closed-form expressions for the estimators involved in the empirical Bayes estimators (except for the Rao estimator), Monte Carlo simulations are used to evaluate their performance. Only for small sample sizes are there clear differences between the estimators; consequently, for larger sample sizes the formula for the Bayes risk of the Rao estimator can be used to calculate the Bayes risk of the other estimators.
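A minimal sketch of the classical empirical Bayes idea for this model: shrink each group mean toward the grand mean with a weight estimated from the data. All sample sizes and variance components below are illustrative assumptions, and this is only the classical estimator, not the Morris or Rao variants.

```python
import numpy as np

rng = np.random.default_rng(1)

# Balanced two-level data: J groups, n units each, random-intercept model
# y_ij = mu + u_j + e_ij, with u_j ~ N(0, tau2) and e_ij ~ N(0, sigma2).
J, n, mu, tau2, sigma2 = 200, 10, 2.0, 1.0, 4.0
u = rng.normal(0, np.sqrt(tau2), J)
y = mu + u[:, None] + rng.normal(0, np.sqrt(sigma2), (J, n))

ybar = y.mean(axis=1)            # group means
grand = ybar.mean()              # grand mean

# Classical empirical Bayes: shrink (ybar - grand) by the estimated weight
# tau2_hat / (tau2_hat + sigma2_hat / n).
sigma2_hat = y.var(axis=1, ddof=1).mean()
tau2_hat = max(ybar.var(ddof=1) - sigma2_hat / n, 0.0)  # truncate at zero
w = tau2_hat / (tau2_hat + sigma2_hat / n)
u_eb = w * (ybar - grand)        # EB estimates of the random intercepts u_j

mse_raw = np.mean((ybar - grand - u) ** 2)  # unshrunken group-mean deviations
mse_eb = np.mean((u_eb - u) ** 2)
print(w, mse_eb, mse_raw)
```

The shrinkage weight trades off the between-group and within-group variance estimates; with many groups, the EB estimates track the true intercepts more closely than the raw deviations do.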

9.
As health care costs rose significantly in the 1990s, investments in information technology (IT) in the health care industry increased continuously, both to improve the quality of patient care and to respond to government pressure to reduce costs. Several studies have investigated the impact of IT on productivity, with mixed conclusions. In this paper, we revisit this issue and re-examine the impact of IT investments on hospital productivity using two data mining techniques, which allow us to explore interactions between the input variables as well as conditional impacts. The results indicate that the relationship between IT investment and productivity is very complex: the impact of IT investment is not uniform, and its rate varies contingent on the amounts invested in IT Stock, Non-IT Labor, Non-IT Capital, and possibly time.

10.
For the multivariate one-way analysis of variance, a test statistic based solely on the rank orders of the data is proposed. In the two-group case the statistic simplifies to a test of Puri and Sen [19]. Monte Carlo simulation is used to evaluate the performance of the test statistic under various distributions; these evaluations include the simulated significance levels and power functions. By its nature, the proposed test is easier to implement than other existing rank-based tests.

11.
This paper develops a Bayesian analysis in the context of record values from the two-parameter Weibull distribution. Maximum likelihood (ML) and Bayes estimates based on record values are derived for the two unknown parameters and for some survival-time quantities, e.g. the reliability and hazard functions. The Bayes estimates are obtained using a conjugate prior for the scale parameter and a discrete prior for the shape parameter, with respect to both a symmetric loss function (squared error) and an asymmetric one (the linear-exponential, LINEX, loss). The ML and Bayes estimates are compared via a Monte Carlo simulation study. A practical example consisting of real record values from an accelerated test on insulating fluid reported by Nelson is used for illustration and comparison. Finally, the Bayesian predictive density function, needed to obtain bounds for the predictive interval of a future record, is derived and discussed using a numerical example. The results may be of interest in situations where only record values are stored.
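The asymmetric LINEX loss mentioned above has a well-known Bayes estimator, d* = -(1/a) log E[exp(-a*theta)], which can be evaluated from posterior draws. The sketch below uses a stand-in normal posterior (its mean, spread, and the value of `a` are illustrative assumptions, not quantities from the paper).

```python
import numpy as np

def linex_bayes(theta_samples, a):
    """Bayes estimate under LINEX loss from posterior draws:
    d* = -(1/a) * log E[exp(-a * theta)], computed with log-mean-exp
    for numerical stability."""
    theta = np.asarray(theta_samples, float)
    m = (-a * theta).max()
    return -(m + np.log(np.mean(np.exp(-a * theta - m)))) / a

rng = np.random.default_rng(0)
draws = rng.normal(1.0, 0.5, 200_000)  # stand-in posterior for a parameter

d_sq = draws.mean()                    # Bayes estimate under squared error loss
d_lx = linex_bayes(draws, a=1.0)       # a > 0 penalises over-estimation
print(d_sq, d_lx)
```

For a normal posterior N(m, s^2) the LINEX estimator is m - a*s^2/2, so with a > 0 it sits below the posterior mean, reflecting the heavier penalty on over-estimation.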

12.
The Monte Carlo method is introduced; a problem that arises when it is used to simulate the Buffer problem is identified, and an improved approach is given. The principle and method of generating random variables of arbitrary distributions by the Monte Carlo method are then presented, and random variables following the Beta distribution and the standard normal distribution are simulated on a computer and the results are tested.
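The two generators named in the abstract can be sketched as follows: acceptance-rejection for a Beta density and the Box-Muller transform for standard normals. The specific Beta(2, 3) parameters and sample size are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Acceptance-rejection for Beta(2, 3): f(x) = 12 x (1 - x)^2 on [0, 1],
# dominated by c * Uniform(0, 1) with c = max f = f(1/3) = 16/9.
c = 12 * (1 / 3) * (2 / 3) ** 2
x = rng.uniform(size=n)
u = rng.uniform(size=n)
beta = x[u <= 12 * x * (1 - x) ** 2 / c]   # accepted draws ~ Beta(2, 3)

# Box-Muller transform: two uniform streams -> standard normal draws.
u1, u2 = rng.uniform(size=n), rng.uniform(size=n)
z = np.sqrt(-2 * np.log(u1)) * np.cos(2 * np.pi * u2)

print(beta.mean(), z.mean(), z.var())
```

The acceptance rate is 1/c = 9/16, and the accepted sample mean should be near the Beta(2, 3) mean of 0.4, while `z` should have mean near 0 and variance near 1, which is the kind of check the abstract refers to.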

13.
A dynamic data reconciliation method based on robust estimation
To address shortcomings of current dynamic data reconciliation methods, and building on previous work, a robust estimation function constructed from robust estimation principles is applied to the reconciliation of data containing outlier-type gross errors. Monte Carlo simulation results and reconciliation calculations on the TE problem show that this robust-estimation-based method for simultaneous gross error detection and data reconciliation can, while producing reconciled data, accurately detect and identify the gross errors contained in the measurements, and offers clear advantages.

14.
15.
Two algorithms, and corresponding Fortran programs, for computing posterior moments and densities using the principle of importance sampling are described in detail. The first algorithm uses a multivariate Student t importance function as an approximation to the posterior and can be applied when the integrand is moderately skew. The second uses a decomposition: a multivariate normal importance function generates directions (lines), and one-dimensional classical quadrature evaluates the integrals defined on the generated lines. The second algorithm can be used in cases where the integrand is possibly very skew in any direction.
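The first algorithm above, a Student t importance function for posterior moments, can be sketched in one dimension with self-normalized weights. The target "posterior" below is a made-up mildly skew density, and the degrees of freedom and scale are illustrative assumptions; normalizing constants cancel in the self-normalized estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unnormalised log-posterior: standard normal, suppressed on the left,
# so the target is mildly right-skew.
def log_post(theta):
    return -0.5 * theta**2 + 0.5 * theta**3 * (theta < 0)

# Student t importance function centred at the mode (0 here); its heavy
# tails keep the importance weights bounded against the Gaussian-tailed target.
df, scale, n = 5, 1.5, 200_000
draws = scale * rng.standard_t(df, n)
log_q = -0.5 * (df + 1) * np.log1p((draws / scale) ** 2 / df)  # t log-kernel

logw = log_post(draws) - log_q
w = np.exp(logw - logw.max())
w /= w.sum()                              # self-normalised weights

post_mean = np.sum(w * draws)             # first posterior moment
post_var = np.sum(w * draws**2) - post_mean**2
print(post_mean, post_var)
```

Since the left tail of the target is thinned, the estimated posterior mean comes out positive, which is the sort of skewness the t importance function is meant to handle.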

16.
This article addresses orness measures that reflect the or-like degree of the Bonferroni mean (BM) and its variants, and portrays some properties of these operators and their orness measures analytically. The general orness measure, however, involves multiple integrals whose fold equals the number of aggregated elements, so the computation becomes complicated when that number is large; moreover, an analytical formula for the orness measure often cannot be obtained. For this reason, this study uses Monte Carlo simulation to validate the results, estimating the two parameters of the BM for a predefined orness value and a fixed length of the input vector. Besides the theoretical study of orness measures related to the BM and its variants, the article explores the simulation-based results, supported by four numerical examples.
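A minimal sketch of the Monte Carlo route described above: estimate the orness of a BM operator as (E[A(U)] - E[min U]) / (E[max U] - E[min U]) for U uniform on the unit cube, using the closed forms E[min] = 1/(n+1) and E[max] = n/(n+1) for n uniforms. The BM parameters p, q and the dimension are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def bonferroni_mean(x, p, q):
    """BM_{p,q}(x) = ( (1/(n(n-1))) * sum_{i != j} x_i^p x_j^q )^(1/(p+q)),
    using sum_{i != j} x_i^p x_j^q = (sum x^p)(sum x^q) - sum x^(p+q)."""
    n = x.shape[-1]
    s = (x**p).sum(-1) * (x**q).sum(-1) - (x ** (p + q)).sum(-1)
    return (s / (n * (n - 1))) ** (1.0 / (p + q))

n_dim, n_mc = 3, 200_000
u = rng.uniform(size=(n_mc, n_dim))
a = bonferroni_mean(u, p=1, q=2).mean()   # Monte Carlo estimate of E[BM(U)]
e_min, e_max = 1 / (n_dim + 1), n_dim / (n_dim + 1)
orness = (a - e_min) / (e_max - e_min)
print(orness)
```

This replaces the n-fold integral with a sample average, which is exactly why the abstract resorts to simulation when the aggregated elements are numerous.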

17.
The bootstrap is a computer-intensive statistical method widely used for nonparametric inference. Categorical data analysis, in particular the analysis of contingency tables, is common in applied fields, yet only a few research papers have explored bootstrap tests in this setting. This work considers nonparametric bootstrap tests for the analysis of contingency tables. The p-values of tests in contingency tables are discrete and should be uniformly distributed under the null hypothesis; the results of this article show that the bootstrap versions work better than the standard tests. Properties of the proposed tests are illustrated and discussed using Monte Carlo simulations, and the article concludes with an analytical example examining the performance of the proposed tests and the confidence interval of the association coefficient.
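A generic bootstrap independence test for a contingency table can be sketched as follows: compute the Pearson chi-square statistic, then resample tables from the product of the observed marginals (the null model) to get a Monte Carlo p-value. This is a standard construction, not necessarily the exact tests of the paper; the table and replicate count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def chi2_stat(table):
    """Pearson chi-square statistic of an r x c contingency table."""
    table = np.asarray(table, float)
    exp = np.outer(table.sum(1), table.sum(0)) / table.sum()
    return ((table - exp) ** 2 / exp).sum()

def bootstrap_independence_test(table, B=2000):
    """Bootstrap p-value for independence: redraw N cell counts from the
    product of the observed marginal proportions and compare statistics."""
    table = np.asarray(table, float)
    N = int(table.sum())
    p_null = np.outer(table.sum(1), table.sum(0)).ravel() / N**2
    t_obs = chi2_stat(table)
    t_boot = np.empty(B)
    for b in range(B):
        counts = rng.multinomial(N, p_null).reshape(table.shape)
        t_boot[b] = chi2_stat(counts)
    return (t_boot >= t_obs).mean()

table = [[30, 10], [15, 25]]
p_value = bootstrap_independence_test(table)
print(p_value)
```

For this clearly dependent table the bootstrap p-value is tiny; unlike the chi-square approximation, the resampled reference distribution respects the discreteness of the counts.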

18.
Maximum likelihood estimation has a rich history and has been successfully applied to many problems, including dynamical system identification. Different approaches have been proposed in the time and frequency domains. In this paper we discuss the relationship between these approaches and establish conditions under which the different formulations are equivalent for finite-length data. A key point in this context is how initial (and final) conditions are treated and how they enter the likelihood function.

19.
20.
Many state-of-the-art classification algorithms for data with linearly ordered attribute domains and a linearly ordered label set insist on the monotonicity of the induced classification rule. Training and evaluating such algorithms requires sufficiently general monotone data sets. In this short contribution we introduce an algorithm that allows for the (almost) uniform random generation of monotone data sets based on the Markov chain Monte Carlo method.
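A minimal sketch of the MCMC idea for binary labels: start from a trivially monotone labeling and repeatedly propose single-point relabelings, accepting only those that preserve monotonicity. With a symmetric proposal, the stationary distribution is (close to) uniform over monotone labelings. The point set, label set, and chain length are illustrative assumptions, not the paper's algorithm verbatim.

```python
import numpy as np

rng = np.random.default_rng(0)

def is_monotone(X, y):
    """True iff y respects the componentwise order on the rows of X:
    x_i <= x_j (in every attribute) implies y_i <= y_j."""
    dom = (X[:, None, :] <= X[None, :, :]).all(-1)   # dom[i, j]: x_i <= x_j
    return not np.any(dom & (y[:, None] > y[None, :]))

def random_monotone_labels(X, labels=(0, 1), steps=5_000):
    """Single-site Metropolis chain over monotone labelings: propose a
    relabel of one random point and accept iff monotonicity is preserved."""
    n = len(X)
    y = np.zeros(n, int)                 # the all-zeros labeling is monotone
    for _ in range(steps):
        i = rng.integers(n)
        old = y[i]
        y[i] = rng.choice(labels)
        if not is_monotone(X, y):
            y[i] = old                   # reject: revert the proposed flip
    return y

X = rng.uniform(size=(20, 2))            # 20 points, 2 ordered attributes
y = random_monotone_labels(X)
print(y, is_monotone(X, y))
```

Every state visited is a valid monotone data set, so samples taken after a burn-in period can be used directly to train and evaluate monotone classifiers.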
