Similar Documents
20 similar documents retrieved.
1.
This paper is concerned with ANOVA-like tests in the context of mixed discrete and continuous data. The likelihood ratio approach is used to obtain a location test in the mixed data setting after specifying a general location model for the joint distribution of the mixed discrete and continuous variables. The approach allows the problem to be treated from a multivariate perspective, so that the discrete and continuous parameters of the model are tested simultaneously, thus avoiding the problem of multiple significance testing. Moreover, associations among variables are accounted for, resulting in improved power. Unlike existing distance-based alternatives, which rely on asymptotic theory, the likelihood ratio test is exact. In addition, it can be viewed as an extension of the classical multivariate ANOVA to the mixed data setting. We compare its performance against that of currently available tests via Monte Carlo simulations. Two real-data examples are presented to illustrate the methodology.

2.
Inverse sampling continues to sample subjects until a pre-specified number of rare events of interest has been observed. It is generally considered more appropriate than the usual binomial sampling when subjects arrive sequentially, when the response probability is rare, and when maximum likelihood estimators of some epidemiological measures are undefined under binomial sampling. A reliable but conservative exact conditional procedure for the ratio of the response probabilities of subjects without the attribute of interest has been studied. However, such a procedure is inapplicable to the risk ratio (i.e., the ratio of the response probabilities of subjects with the attribute of interest). In this paper, we investigate several test statistics (namely, Wald-type, score, and likelihood ratio statistics) for testing a non-unity risk ratio under the standard inverse sampling scheme, in which one continues to sample until the predetermined number of index subjects with the attribute of interest is observed. Both asymptotic and numerical approximate unconditional methods are considered for P-value calculation. The performance of these test procedures is evaluated under different settings by means of Monte Carlo simulation. In general, the Wald-type test statistic is preferable because of its satisfactory and stable performance with approximate unconditional procedures. The methodology is illustrated with a real example from a heart disease study.
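As a rough illustration of the Wald-type route, the sketch below tests H0: RR = 1 under inverse sampling using the delta method on the log risk ratio. It is a minimal, assumption-laden version (p_i estimated by r_i / n_i, asymptotic normality), not the exact statistic studied in the paper.

```python
import numpy as np
from scipy.stats import norm

def wald_log_rr_inverse_sampling(r1, n1, r2, n2):
    """Illustrative Wald-type test of H0: RR = 1 under inverse sampling.

    r_i: pre-specified number of index subjects in group i (fixed by design);
    n_i: total number of subjects sampled in group i (random).
    Assumes p_i is estimated by r_i / n_i and applies the delta method to
    log(p1 / p2), so Var(log p_hat_i) is approximated by (1 - p_hat_i) / r_i.
    """
    p1, p2 = r1 / n1, r2 / n2
    z = np.log(p1 / p2) / np.sqrt((1 - p1) / r1 + (1 - p2) / r2)
    return z, 2 * norm.sf(abs(z))  # statistic and two-sided asymptotic P-value
```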

3.
Wilks' Lambda (the likelihood ratio test, LRT) is a commonly used statistic for inference about the mean vectors of several multivariate normal populations. However, it is well known that the Wilks' Lambda statistic, being based on the classical normal-theory estimates of generalized dispersion, is extremely sensitive to outliers. A robust multivariate statistic for one-way MANOVA based on the Minimum Covariance Determinant (MCD) estimator is presented. The classical Wilks' Lambda statistic is turned into a robust one by substituting the highly robust and efficient reweighted MCD estimates for the classical estimates. Monte Carlo simulations are used to evaluate the performance of the test statistic under various distributions in terms of simulated significance levels, power functions, and robustness. The power of the robust and classical statistics is compared using size-power curves, whose construction requires no knowledge of the distribution of the statistics. As a real-data application, the mean vectors of an ecogeochemical data set are examined.
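A minimal sketch of the plug-in idea, assuming scikit-learn's MinCovDet provides the reweighted MCD location and scatter; the exact scaling and the reference distribution used in the paper may differ.

```python
import numpy as np
from sklearn.covariance import MinCovDet

def robust_wilks_lambda(groups):
    """Robust Wilks' Lambda sketch: det(W) / det(T) with MCD plug-in estimates.

    groups: list of (n_g, p) arrays, one array of observations per group.
    The within scatter W pools the reweighted MCD scatters of the groups; the
    total scatter T is the MCD scatter of the robustly centred, pooled data.
    """
    p = groups[0].shape[1]
    W = np.zeros((p, p))
    centred = []
    for X in groups:
        mcd = MinCovDet().fit(X)            # reweighted MCD location and scatter
        W += len(X) * mcd.covariance_
        centred.append(X - mcd.location_)   # remove the robust group centre
    n = sum(len(X) for X in groups)
    T = n * MinCovDet().fit(np.vstack(centred)).covariance_
    return np.linalg.det(W) / np.linalg.det(T)
```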

4.
In this paper, we propose a new family of watermark detectors for additive watermarks in digital images. These detectors are based on a recently proposed hierarchical, two-level image model, which has proved beneficial for image recovery problems. The top level of the model exploits the spatially varying local statistics of the image, while the bottom level characterizes image variations along two principal directions. Based on this model, we derive a class of detectors for the additive watermark detection problem, including generalized likelihood ratio, Bayesian, and Rao test detectors. We also propose methods to estimate the parameters these detectors require. Our numerical experiments demonstrate that the new detectors can outperform several state-of-the-art detectors.

5.
A generalized multiresolution likelihood ratio is first defined on a multiresolution quadtree; it characterizes and accumulates the differences between target and background in SAR (synthetic aperture radar) images across resolutions, thereby increasing the separability of target and background. To achieve unsupervised segmentation, the classical mixture-model approach is used to estimate, at each resolution, the parameters of the set of density functions entering the corresponding multiresolution likelihood ratio. To account for the Markov dependence between the pixel being classified and its neighbours and to reduce sensitivity to noise, the class of the central pixel is determined by the sum of the generalized multiresolution likelihood values of the pixels within a window. Experimental comparisons with conventional segmentation techniques show that the method gives clear improvements in segmentation accuracy, robustness to noise, and edge smoothness.

6.
The generalized likelihood ratio (GLR) test is a widely used method for detecting abrupt changes in linear systems and signals. In this paper the marginalized likelihood ratio (MLR) test is introduced to eliminate three shortcomings of the GLR while preserving its applicability and generality. First, the need for a user-chosen threshold is eliminated in the MLR. Second, the noise levels need not be known exactly and may even change over time, so the MLR is robust. Finally, a very efficient exact implementation whose complexity is linear in time is developed for batch-wise data processing, compared with the quadratic-in-time complexity of the exact GLR.
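For orientation, here is a minimal sketch of the classical GLR detector for a single jump in the mean of white Gaussian noise (known pre-change mean and variance assumed); it illustrates the user-chosen threshold that the MLR test is designed to remove, not the MLR itself.

```python
import numpy as np

def glr_mean_jump(x, mu0=0.0, sigma=1.0):
    """GLR sketch for one abrupt jump in the mean of white Gaussian noise.

    For each candidate change time k the unknown jump size is replaced by its
    MLE (the post-change sample mean), which gives twice the log likelihood
    ratio in closed form; the maximiser is the estimated change time and the
    maximum is compared with a user-chosen threshold.
    """
    x = np.asarray(x, float)
    stats = np.empty(len(x))
    for k in range(len(x)):
        seg = x[k:] - mu0
        stats[k] = seg.sum() ** 2 / (sigma ** 2 * len(seg))  # 2 * log LR at time k
    k_hat = int(np.argmax(stats))
    return k_hat, stats[k_hat]
```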

7.
In this paper, a new life test plan called the progressive first-failure-censoring scheme is introduced. Maximum likelihood estimates, exact and approximate confidence intervals, and an exact confidence region for the parameters of the Weibull distribution are discussed under the new censoring scheme. A numerical example illustrates the proposed scheme, and simulation results are used to assess the performance of the estimation methods developed here. The expected time required to complete the proposed life test plan is derived. Finally, a numerical study comparing different censoring schemes in terms of expected test time is given.
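A minimal sketch of the Weibull likelihood maximization under this scheme, assuming the commonly used progressive first-failure-censored likelihood, proportional to prod_i f(x_i) [1 - F(x_i)]^{k(R_i+1)-1}; the normalizing constant and the interval procedures discussed in the paper are omitted.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def weibull_mle_pffc(x, R, k):
    """Sketch: Weibull MLE under progressive first-failure censoring.

    x: observed first failures x_(1) < ... < x_(m); R: progressive removal
    scheme (R_1, ..., R_m); k: size of each group. Maximises the assumed
    log-likelihood  sum_i [ log f(x_i) + (k*(R_i+1) - 1) * log S(x_i) ].
    """
    x, R = np.asarray(x, float), np.asarray(R, float)

    def neg_loglik(theta):
        shape, scale = np.exp(theta)      # log-parametrisation keeps both positive
        return -np.sum(weibull_min.logpdf(x, shape, scale=scale)
                       + (k * (R + 1) - 1) * weibull_min.logsf(x, shape, scale=scale))

    res = minimize(neg_loglik, x0=[0.0, np.log(x.mean())], method="Nelder-Mead")
    return np.exp(res.x)                  # (shape_hat, scale_hat)
```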

8.
In this paper, several diagnostic measures based on the case-deletion model are proposed for log-Birnbaum-Saunders regression models (LBSRM), as a supplement to the recent work of Galea et al. [2004. Influence diagnostics in log-Birnbaum-Saunders regression models. J. Appl. Statist. 31, 1049-1064], who studied influence diagnostics for the LBSRM mainly through local influence analysis. It is shown that the case-deletion model is equivalent to the mean-shift outlier model in the LBSRM, and an outlier test based on the mean-shift outlier model is presented. Furthermore, we investigate a test of homogeneity for the shape parameter in the LBSRM, a problem raised by both Rieck and Nedelman [1991. A log-linear model for the Birnbaum-Saunders distribution. Technometrics 33, 51-60] and Galea et al. [2004]. We obtain the likelihood ratio and score statistics for this test. Finally, a numerical example illustrates the methodology, and the properties of the likelihood ratio and score statistics are investigated through Monte Carlo simulations.

9.
We develop nearly unbiased estimators for the two-parameter Birnbaum-Saunders distribution [Birnbaum, Z.W., Saunders, S.C., 1969a. A new family of life distributions. J. Appl. Probab. 6, 319-327], which is commonly used in reliability studies. We derive modified maximum likelihood estimators that are bias-free to second order, and we also consider bootstrap-based bias correction. The numerical evidence we present favors three bias-adjusted estimators. Different interval estimation strategies are evaluated. Additionally, we derive a Bartlett correction that improves the finite-sample performance of the likelihood ratio test.
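A minimal sketch of the bootstrap-based route, assuming SciPy's fatiguelife distribution (shape = alpha, scale = beta, location fixed at zero) stands in for the Birnbaum-Saunders model; the analytic second-order correction and the Bartlett correction are not reproduced here.

```python
import numpy as np
from scipy.stats import fatiguelife

def bootstrap_bias_corrected_bs(x, B=500, seed=None):
    """Parametric-bootstrap bias correction sketch for Birnbaum-Saunders MLEs.

    Applies the standard correction theta_BC = 2*theta_hat - mean(theta_hat*),
    where the starred estimates are MLEs on parametric bootstrap resamples.
    """
    rng = np.random.default_rng(seed)
    alpha_hat, _, beta_hat = fatiguelife.fit(x, floc=0)   # MLEs with location = 0
    boot = np.empty((B, 2))
    for b in range(B):
        xb = fatiguelife.rvs(alpha_hat, scale=beta_hat, size=len(x), random_state=rng)
        a_b, _, s_b = fatiguelife.fit(xb, floc=0)
        boot[b] = a_b, s_b
    return 2 * np.array([alpha_hat, beta_hat]) - boot.mean(axis=0)
```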

10.
We investigate the theoretical properties of new instruments-based test statistics recently proposed [3] for the detection and diagnosis of changes in the AR part of a multivariable ARMA process. The design flexibilities are analyzed and the optimum design of the test is exhibited. The connection with the accuracy of the instrumental variable (I.V.) identification method [14] is established, and a comparison with local likelihood ratio tests is made. These tests were developed as a solution to the problem of vibration monitoring for offshore platforms.

11.
The main purpose of this work is to study the behaviour of Skovgaard's [Skovgaard, I.M., 2001. Likelihood asymptotics. Scandinavian Journal of Statistics 28, 3–32] adjusted likelihood ratio statistic for testing simple hypotheses in a new class of regression models proposed here. The proposed class considers Dirichlet-distributed observations, with the parameters indexing the Dirichlet distributions related to covariates through unknown regression coefficients. The class is useful for modelling data consisting of multivariate positive observations summing to one, and it generalizes the beta regression model described in Vasconcellos and Cribari-Neto [Vasconcellos, K.L.P., Cribari-Neto, F., 2005. Improved maximum likelihood estimation in a new class of beta regression models. Brazilian Journal of Probability and Statistics 19, 13–31]. We show that, for our model, Skovgaard's adjusted likelihood ratio statistic has a simple compact form that can be easily implemented in standard statistical software, and that it is approximately chi-squared distributed with a high degree of accuracy. Numerical simulations show that the modified test is more reliable in finite samples than the usual likelihood ratio procedure. An empirical application is also presented and discussed.

12.
In this paper we propose a new lifetime distribution that can accommodate bathtub-shaped, unimodal, increasing, and decreasing hazard rate functions. The model has three parameters and generalizes the exponential power distribution proposed by Smith and Bain (1975) through an additional shape parameter. The maximum likelihood estimation procedure is discussed, and a small-scale simulation study examines the performance of the likelihood ratio statistic in small and moderate-sized samples. Three real datasets illustrate the methodology.

13.
This paper presents a new and systematic method of approximating exact nonlinear filters with finite dimensional filters, using the differential geometric approach to statistics. The projection filter is defined rigorously in the case of exponential families. A convenient exponential family is proposed which allows one to simplify the projection filter equation and to define an a posteriori measure of the local error of the projection filter approximation. Finally, simulation results are discussed for the cubic sensor problem.

14.
Resampling methods such as the bootstrap are routinely used to estimate the finite-sample null distributions of a range of test statistics. We present a simple and tractable way to perform classical hypothesis tests based upon a kernel estimate of the CDF of the bootstrap statistics. This approach has a number of appealing features: (i) it can perform well when the number of bootstraps is extremely small, (ii) it is approximately exact, and (iii) it can yield substantial power gains relative to the conventional approach. The proposed approach is likely to be useful when the statistic being bootstrapped is computationally expensive.
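The kernel-smoothing idea can be sketched in a few lines: estimate the bootstrap null CDF with a Gaussian-kernel CDF and read the P-value off it. The bandwidth rule below (a Silverman-type rule of thumb) and the one-sided rejection region are illustrative assumptions, not the paper's prescriptions.

```python
import numpy as np
from scipy.stats import norm

def smoothed_bootstrap_pvalue(t_obs, t_boot):
    """Kernel-smoothed bootstrap P-value sketch (one-sided: reject for large t).

    The bootstrap null CDF is estimated as F_hat(t) = mean(Phi((t - T_b) / h)),
    so the P-value is 1 - F_hat(t_obs) = mean(Phi((T_b - t_obs) / h)).
    """
    t_boot = np.asarray(t_boot, float)
    h = 1.06 * t_boot.std(ddof=1) * len(t_boot) ** (-1 / 5)  # rule-of-thumb bandwidth
    return norm.cdf((t_boot - t_obs) / h).mean()
```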

15.
The properties of a new nonparametric goodness-of-fit test are explored. It is based on a likelihood ratio test applied via a consistent series density estimator in the exponential family, and the focus is on its computational and numerical properties. Specifically, it is found that the choice of approximating basis is not crucial, and that choosing the model dimension through data-driven selection criteria yields a feasible, parsimonious procedure. Numerical experiments show that the new tests have significantly more power than established tests, whether the latter are based on the empirical distribution function or on alternative density estimators.

16.
This paper presents a new test to distinguish between meaningful and non-meaningful HMM-modelled activity patterns in human activity recognition systems. Operating as a hypothesis test, it generates alternative models from the available classes and bases the decision on a likelihood ratio test (LRT). The proposed test differs from traditional LRTs in two respects. First, the likelihood ratio, called the pairwise likelihood ratio (PLR), is computed for each pair of HMMs, so models for non-meaningful patterns are not required. Second, the distribution of the likelihood ratios, rather than a fixed threshold, is used as the measurement, and multiple measurements from multiple PLR tests are combined to improve rejection accuracy. The advantage of the proposed test is that it can be built from the meaningful samples alone.
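As a rough illustration, the sketch below computes the matrix of pairwise log-likelihood ratios for one observation sequence, assuming hmmlearn-style HMMs fitted per class; how the distributions of these ratios are turned into a rejection rule follows the paper and is not reproduced.

```python
import numpy as np

def pairwise_log_lr(models, X):
    """Pairwise log-likelihood ratios (PLR) sketch for one observation sequence.

    models: dict mapping class name -> fitted HMM exposing .score(X), e.g. an
    hmmlearn GaussianHMM; X: (T, d) array of observations.
    Returns the class names and the matrix of log L_i(X) - log L_j(X).
    """
    names = list(models)
    loglik = np.array([models[c].score(X) for c in names])  # per-class log-likelihood
    return names, loglik[:, None] - loglik[None, :]
```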

17.
A graphical test statistic is proposed for testing goodness of fit for the multinomial distribution. The exact power of the test statistic is calculated, and the statistic is compared with traditional test statistics on the same example.

18.
The classical sign test is appropriate for hypotheses about a specified success probability when it is based on independent trials. For such a hypothesis we introduce a new exact test suitable for clustered binary data, combining a permutation approach with an exact parametric bootstrap calculation. Simulation studies show it to be superior to a sign test based on aggregated cluster-level data. The new test is more powerful than or comparable to a standard permutation test whenever (1) the number of clusters is small or (2) the number of clusters is larger but clustering is strong. The results from a chemical repellency trial are used to illustrate three legitimate test methods.

19.
The likelihood ratio spatial scan statistic has been widely used in spatial disease surveillance and spatial cluster detection. To better understand cluster mechanisms, an equivalent model-based counterpart of the spatial scan statistic is proposed that unifies the currently loosely coupled methods for including ecological covariates in the spatial scan test. In addition, the utility of the model-based approach with a Wald-based scan statistic is demonstrated for accounting for overdispersion and heterogeneity in background rates. Simulation and case studies show that both the likelihood ratio-based and Wald-based scan statistics are comparable with the original spatial scan statistic.
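For concreteness, here is a minimal sketch of the Poisson log likelihood ratio evaluated for a single candidate scan zone (the Kulldorff form, assuming expected counts scaled so they sum to the total observed count and a high-risk, one-sided scan); the covariate-adjusted and Wald-based variants discussed above are not shown.

```python
import numpy as np

def poisson_scan_llr(c, e, C):
    """Poisson scan-statistic log likelihood ratio for one candidate zone.

    c, e: observed and expected counts inside the zone; C: total observed count,
    with expected counts assumed scaled so that they also sum to C.
    Only zones with excess risk (c > e) contribute in a high-risk scan.
    """
    if c <= e:
        return 0.0
    llr = c * np.log(c / e)
    if C > c:
        llr += (C - c) * np.log((C - c) / (C - e))
    return llr
```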

20.
Taking the translation of The Classic of Tea (《茶经》) as an example, a fast segmentation method for classical Chinese texts is proposed based on tree-pruning theory. First, a likelihood ratio statistic is used to score two-character, three-character, and longer candidate units; on this basis, a tree-pruning model and algorithm for fast segmentation of classical texts is constructed, together with its basic flowchart; finally, the effectiveness and soundness of the algorithm are verified on The Classic of Tea. Theoretical analysis and worked examples show that the algorithm segments classical texts automatically and effectively while reducing computational time complexity, and that it has promising applications in promoting the translation and dissemination of Chinese classics abroad.
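As an illustration of the candidate-scoring step, the sketch below computes one common form of likelihood ratio statistic for a two-character candidate, Dunning's G² on the 2×2 contingency table of character co-occurrence; the paper's exact statistic, its extension to longer units, and the tree-pruning step are not reproduced.

```python
import numpy as np

def dunning_llr(k11, k12, k21, k22):
    """G^2 log-likelihood ratio sketch for a two-character candidate unit AB.

    k11: count of AB; k12: A followed by a character other than B;
    k21: a character other than A followed by B; k22: neither.
    Larger values suggest the pair behaves like a single lexical unit.
    """
    obs = np.array([[k11, k12], [k21, k22]], float)
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / obs.sum()  # independence model
    nz = obs > 0                                                  # treat 0*log(0) as 0
    return 2.0 * np.sum(obs[nz] * np.log(obs[nz] / exp[nz]))
```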
