Similar Literature
20 related articles found.
1.
An omnibus test for a generalized version of the martingale difference hypothesis (MDH) is proposed. This generalized hypothesis includes the usual MDH, testing for constancy of conditional moments such as conditional homoscedasticity (ARCH effects), and testing for directional predictability, and a unified approach for dealing with all of these testing problems is proposed. These hypotheses are long-standing problems in econometric time series analysis, and they have typically been tested using the sample autocorrelations or, in the spectral domain, the periodogram. Since these hypotheses also cover nonlinear predictability, tests based on such second-order statistics are inconsistent against uncorrelated processes in the alternative hypothesis. To circumvent this problem, pairwise integrated regression functions are introduced as measures of linear and nonlinear dependence. The proposed test does not require choosing a lag order that depends on the sample size, smoothing the data, or formulating a parametric alternative model. Moreover, the test is robust to higher-order dependence, in particular to conditional heteroskedasticity. Under general dependence the asymptotic null distribution depends on the data-generating process, so a bootstrap procedure is considered and a Monte Carlo study examines its finite-sample performance. Finally, the martingale and conditional heteroskedasticity properties of the Pound/Dollar exchange rate are investigated.
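
As a rough illustration of the idea, the sketch below (Python, assuming a univariate series) builds a Cramér-von Mises-type statistic from pairwise integrated regression functions gamma_j(x) = E[(Y_t - mu) 1(Y_{t-j} <= x)] evaluated at the sample points, and calibrates it with a wild bootstrap that re-weights only the centered errors. The function names, the lag cut-off and the Rademacher multipliers are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def cvm_stat(y, e, max_lag):
    """CvM-type statistic from pairwise integrated regression functions
    gamma_j(x) = E[e_t 1(Y_{t-j} <= x)], estimated at the sample points of y."""
    n = len(y)
    stat = 0.0
    for j in range(1, max_lag + 1):
        # indicator 1(Y_{t-j} <= Y_i) for t = j+1..n (rows) and evaluation points Y_i (cols)
        ind = (y[:-j][:, None] <= y[None, :]).astype(float)
        gamma = ind.T @ e[j:] / n           # hat gamma_j evaluated at each Y_i
        stat += n * np.mean(gamma ** 2)     # aggregate over lags and evaluation points
    return stat

def mdh_test(y, max_lag=3, n_boot=499, seed=0):
    """Wild-bootstrap p-value: only the centered 'errors' are re-weighted,
    the conditioning values Y_{t-j} are kept fixed."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, float)
    e = y - y.mean()
    stat = cvm_stat(y, e, max_lag)
    boot = [cvm_stat(y, rng.choice([-1.0, 1.0], len(y)) * e, max_lag)
            for _ in range(n_boot)]
    return stat, (1 + sum(b >= stat for b in boot)) / (1 + n_boot)

# An i.i.d. Gaussian series should not reject the martingale difference hypothesis
rng = np.random.default_rng(1)
print(mdh_test(rng.standard_normal(200)))
```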

2.
The robustness of tests for detecting conditional heteroskedasticity that combine artificial neural network techniques with bootstrap methods is analysed against strongly non-linear forms of the conditional variance in the context of ARCH-M models. The small-sample size and power properties of these tests are examined through Monte Carlo experiments with various standard and non-standard models of conditional heteroskedasticity. P-value functions are explored in order to select particularly problematic cases. Graphical presentations, based on the principle of size correction, are used to present the true power of the tests rather than the spurious nominal power usually reported in the literature. In addition, graphics linking the process dynamics with the forms of heteroskedasticity are shown to analyse the circumstances in which the neural networks are effective.
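
A minimal sketch of a neural-network-type diagnostic for neglected conditional heteroskedasticity is given below: squared residuals are regressed on tanh hidden units of their own lags with random input weights, and the n*R^2 statistic is calibrated by an i.i.d. residual bootstrap that imposes the null. The functions nn_arch_stat and nn_arch_test, the activation, and the lag/hidden-unit counts are illustrative assumptions rather than the specific tests studied in the paper.

```python
import numpy as np

def nn_arch_stat(e, weights, lags=2):
    """n*R^2 from regressing e_t^2 on tanh hidden units of lagged squared
    residuals with fixed random input weights."""
    s = e ** 2
    y = s[lags:]
    X = np.column_stack([s[lags - k: len(s) - k] for k in range(1, lags + 1)])
    X = (X - X.mean(0)) / X.std(0)              # standardise before the activations
    H = np.tanh(X @ weights)                    # random hidden-unit activations
    Z = np.column_stack([np.ones(len(y)), H])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    return len(y) * (1 - resid.var() / y.var())

def nn_arch_test(e, lags=2, hidden=5, n_boot=499, seed=0):
    """Bootstrap p-value: i.i.d. resampling of the residuals imposes the null
    of no conditional heteroskedasticity."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((lags, hidden))
    stat = nn_arch_stat(e, W, lags)
    boot = [nn_arch_stat(rng.choice(e, size=len(e), replace=True), W, lags)
            for _ in range(n_boot)]
    return stat, (1 + sum(b >= stat for b in boot)) / (1 + n_boot)

rng = np.random.default_rng(1)
print(nn_arch_test(rng.standard_normal(300)))   # i.i.d. errors: large p-value expected
```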

3.
A panel data regression model with heteroskedastic as well as spatially correlated disturbances is considered, and a joint LM test for homoskedasticity and no spatial correlation is derived. In addition, a conditional LM test for no spatial correlation given heteroskedasticity, as well as a conditional LM test for homoskedasticity given spatial correlation, are also derived. These LM tests are compared with marginal LM tests that ignore heteroskedasticity in testing for spatial correlation, or spatial correlation in testing for homoskedasticity. Monte Carlo results show that these LM tests, as well as their LR counterparts, perform well, even for small N and T. However, misleading inferences can occur when marginal rather than joint or conditional LM tests are used in the presence of spatial correlation or heteroskedasticity.

4.
A robust sign test is proposed for testing unit roots in cross-sectionally dependent panel data. Large-sample Gaussian null asymptotics of the test are established under (fixed N, large T) and, for serially uncorrelated error cases, under (large N, fixed T), where N is the number of panel units and T is the length of the time span. The limiting null distribution is valid even if the error processes are subject to any type of conditional heteroscedasticity. A Monte Carlo experiment reveals that, compared with other existing tests, the proposed test has a very stable size property across wide classes of error distributions, types of conditional heteroscedasticity, types of cross-sectional correlation, and values of (N, T), while retaining reasonable power. In particular, for small T such as T = 5, 10, 20, the proposed test shows much more stable size performance than other existing tests. The unemployment rates of the 51 states of the USA are analyzed by the proposed method, which reveals some evidence for unit roots in the presence of factor and spatial cross-section correlation.
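
The sketch below shows one stylized sign-based panel statistic under simplifying assumptions (no deterministic terms, conditionally symmetric errors, cross-sectionally independent units); the published test includes further adjustments for serial correlation and cross-sectional dependence. The robustness comes from sign(y_{t-1}) * sign(dy_t) being a bounded +/-1 sequence under the null, whatever the conditional heteroscedasticity.

```python
import numpy as np
from scipy.stats import norm

def panel_sign_stat(Y):
    """Stylized panel sign statistic for the unit-root null.

    Y is an (N, T) array. Under the random-walk null with conditionally
    symmetric errors, sign(y_{t-1}) * sign(dy_t) is an i.i.d. +/-1 sequence,
    so each unit statistic is asymptotically N(0, 1) regardless of the form of
    conditional heteroscedasticity; the panel statistic averages over units,
    assuming cross-sectional independence."""
    Y = np.asarray(Y, float)
    N, T = Y.shape
    s = np.sign(Y[:, :-1]) * np.sign(np.diff(Y, axis=1))   # (N, T-1) array of +/-1
    unit_stats = s.sum(axis=1) / np.sqrt(T - 1)            # ~ N(0,1) per unit under H0
    S = unit_stats.sum() / np.sqrt(N)                      # combine across units
    return S, norm.cdf(S)            # small left-tail p-values indicate rejection

# Example: N=20 random walks with heavy-tailed increments
rng = np.random.default_rng(2)
Y = np.cumsum(rng.standard_t(df=3, size=(20, 50)), axis=1)
print(panel_sign_stat(Y))
```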

5.
This paper uses Monte Carlo simulations to examine the properties of the conventional Pearson estimator and some of the best-known outlier-robust estimators of correlation in the presence of general heteroskedasticity. We show that tests of a random walk based on the Pearson autocorrelation coefficient, including the robust form of the variance-ratio test of Lo and MacKinlay [1988. Stock market prices do not follow random walks: evidence from a simple specification test. Rev. Financial Studies 1, 41-66], can be unreliable in the presence of some forms of conditional heteroskedasticity. As an alternative to the Pearson autocorrelation coefficient, we propose the median coefficient of autocorrelation. Our simulation results show that, in contrast to the Pearson autocorrelation coefficient, the median coefficient of autocorrelation is robust to conditional heteroskedasticity. When applied to exchange rate returns, the variance-ratio test based on the median autocorrelation coefficient provides stronger evidence against the random walk hypothesis than the Lo and MacKinlay (1988) robust variance-ratio test.
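
For reference, a sketch of the classical (Pearson-based) Lo-MacKinlay variance-ratio test that serves as the benchmark is given below; the median-autocorrelation version proposed in the paper is not reproduced here, and the homoskedastic standard error is used for simplicity.

```python
import numpy as np
from scipy.stats import norm

def variance_ratio_test(r, q):
    """Classical Lo-MacKinlay variance-ratio test (homoskedastic form).

    Under the random-walk null, the variance of q-period returns is q times
    the variance of one-period returns, so VR(q) should be close to 1."""
    r = np.asarray(r, float)
    n = len(r)
    mu = r.mean()
    var1 = np.mean((r - mu) ** 2)
    rq = np.convolve(r, np.ones(q), mode="valid")      # overlapping q-period sums
    varq = np.mean((rq - q * mu) ** 2) / q
    vr = varq / var1
    # asymptotic std. error of VR(q) - 1 under i.i.d. (homoskedastic) errors
    se = np.sqrt(2 * (2 * q - 1) * (q - 1) / (3 * q * n))
    z = (vr - 1) / se
    return vr, z, 2 * norm.sf(abs(z))

rng = np.random.default_rng(3)
print(variance_ratio_test(rng.standard_normal(1000), q=4))   # i.i.d. returns: VR ~ 1
```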

6.
It is well known that in the context of the classical regression model with heteroskedastic errors, while ordinary least squares (OLS) is not efficient, the weighted least squares (WLS) and quasi-maximum likelihood (QML) estimators that utilize the information contained in the heteroskedasticity are. In the context of unit root testing with conditional heteroskedasticity, while intuition suggests that a similar result should apply, the relative performance of the tests associated with the OLS, WLS and QML estimators is not well understood. In particular, while QML has been shown to be able to generate more powerful tests than OLS, not much is known regarding the relative performance of the WLS-based test. By providing an in-depth comparison of the tests, the current paper fills this gap in the literature.

7.
An approximate F-form of the Lagrange multiplier (LM) test for serial correlation in dynamic regression models is compared with three bootstrap tests. In one bootstrap procedure, residuals from restricted estimation under the null hypothesis are resampled. The other two bootstrap tests use residuals from unrestricted estimation under an alternative hypothesis. A fixed autocorrelation alternative is assumed in one of the two unrestricted bootstrap tests and the other is based upon a Pitman-type sequence of local alternatives. Monte Carlo experiments are used to estimate rejection probabilities under the null hypothesis and in the presence of serial correlation.
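
A sketch of the restricted (null) resampling variant for an AR(1) regression with one autocorrelation lag is shown below: the Breusch-Godfrey statistic is put in F-form and bootstrap samples are built recursively from i.i.d. resampled restricted residuals. The two unrestricted resampling schemes compared in the paper are not shown, and the function names are illustrative.

```python
import numpy as np
from scipy.stats import f as f_dist

def bg_fstat(y, p=1):
    """Approximate F-form of the Breusch-Godfrey LM test for order-p serial
    correlation in the dynamic regression y_t = a + b*y_{t-1} + e_t."""
    Y, X = y[1:], np.column_stack([np.ones(len(y) - 1), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    e = Y - X @ beta
    # auxiliary regression of e_t on X and p lags of e (lags padded with zeros)
    lags = np.column_stack([np.r_[np.zeros(k), e[:-k]] for k in range(1, p + 1)])
    Z = np.column_stack([X, lags])
    g, *_ = np.linalg.lstsq(Z, e, rcond=None)
    ssr_u, ssr_r = np.sum((e - Z @ g) ** 2), np.sum(e ** 2)
    F = ((ssr_r - ssr_u) / p) / (ssr_u / (len(e) - Z.shape[1]))
    return F, beta, e

def bootstrap_bg_test(y, p=1, n_boot=499, seed=0):
    """Restricted (null) bootstrap: i.i.d. resampling of the restricted residuals
    imposes no serial correlation; bootstrap samples are built recursively."""
    rng = np.random.default_rng(seed)
    F0, beta, e = bg_fstat(y, p)
    count = 0
    for _ in range(n_boot):
        eb = rng.choice(e - e.mean(), size=len(y), replace=True)
        yb = np.empty(len(y)); yb[0] = y[0]
        for t in range(1, len(y)):                      # recursive build under the null
            yb[t] = beta[0] + beta[1] * yb[t - 1] + eb[t]
        count += bg_fstat(yb, p)[0] >= F0
    return F0, f_dist.sf(F0, p, len(e) - 2 - p), (1 + count) / (1 + n_boot)

rng = np.random.default_rng(1)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()
print(bootstrap_bg_test(y))     # F statistic, asymptotic p-value, bootstrap p-value
```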

8.
F. Famoye, Computing, 1998, 61(4): 359–369
Goodness-of-fit test statistics based on the empirical distribution function (EDF) are considered for the generalized negative binomial distribution. The small-sample levels of the tests are found to be very close to the nominal significance levels. For small sample sizes, the tests are compared with respect to their simulated power of detecting some alternative hypotheses against a null hypothesis of a generalized negative binomial distribution. The discrete Anderson–Darling test is the most powerful among the EDF tests. Two numerical examples are used to illustrate the application of the goodness-of-fit tests. The support received from the Research Professorship Program at Central Michigan University under grant #22159 is gratefully acknowledged.
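
As a hedged illustration, the sketch below runs a discrete Anderson-Darling-type EDF test calibrated by a parametric bootstrap. Since scipy has no generalized negative binomial, the ordinary negative binomial with method-of-moments estimates stands in, and the support truncation and function names are illustrative choices.

```python
import numpy as np
from scipy import stats

def mom_nbinom(x):
    """Method-of-moments fit of an ordinary negative binomial (a stand-in for
    the generalized negative binomial, which scipy does not provide)."""
    m, v = x.mean(), x.var(ddof=1)
    v = max(v, m + 1e-8)                 # NB requires variance > mean
    p = m / v
    return m * p / (1 - p), p            # (r, p)

def discrete_ad(x, r, p):
    """Discrete Anderson-Darling-type EDF statistic on a truncated support."""
    n = len(x)
    ks = np.arange(0, int(stats.nbinom.ppf(0.9999, r, p)) + 1)
    pk, Fk = stats.nbinom.pmf(ks, r, p), stats.nbinom.cdf(ks, r, p)
    Fn = np.searchsorted(np.sort(x), ks, side="right") / n   # empirical CDF at each k
    w = Fk * (1 - Fk)
    good = w > 0                          # drop cells with degenerate weights
    return n * np.sum((Fn[good] - Fk[good]) ** 2 * pk[good] / w[good])

def gof_pvalue(x, n_boot=499, seed=0):
    """Parametric-bootstrap p-value (parameters are re-estimated per replicate)."""
    rng = np.random.default_rng(seed)
    r, p = mom_nbinom(x)
    stat = discrete_ad(x, r, p)
    count = 0
    for _ in range(n_boot):
        xb = stats.nbinom.rvs(r, p, size=len(x), random_state=rng)
        count += discrete_ad(xb, *mom_nbinom(xb)) >= stat
    return stat, (1 + count) / (1 + n_boot)

x = stats.nbinom.rvs(5, 0.4, size=200, random_state=1)
print(gof_pvalue(x))
```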

9.
Several tests for a zero random effect variance in linear mixed models are compared. This testing problem is non-regular because the tested parameter is on the boundary of the parameter space. Size and power of the different tests are investigated in an extensive simulation study that covers a variety of important settings. These include testing for polynomial regression versus a general smooth alternative using penalized splines. Among the test procedures considered, three are based on the restricted likelihood ratio test statistic (RLRT), while six are different extensions of the linear model F-test to the linear mixed model. Four of the tests with unknown null distributions are based on a parametric bootstrap, while the other tests rely on approximate or asymptotic distributions. The parametric bootstrap-based tests all have a similar performance. Tests based on approximate F-distributions are usually the least powerful among the tests under consideration. The chi-square mixture approximation for the RLRT is confirmed to be conservative, with a corresponding loss in power. A recently developed approximation to the distribution of the RLRT is identified as a rapid, powerful and reliable alternative to computationally intensive parametric bootstrap procedures. This novel method extends the exact distribution available for models with one random effect to models with several random effects.
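
For the chi-square mixture approximation mentioned above, the p-value is a one-liner (sketch below); the study finds this approximation conservative, so in practice it would be replaced by the parametric bootstrap or the recommended fast approximation.

```python
from scipy.stats import chi2

def rlrt_mixture_pvalue(rlrt):
    """Conservative 50:50 chi-square mixture approximation for testing a zero
    random-effect variance: p = 0.5*P(chi2_0 >= t) + 0.5*P(chi2_1 >= t)."""
    if rlrt <= 0:
        return 1.0          # the point mass at zero covers a non-positive statistic
    return 0.5 * chi2.sf(rlrt, df=1)

print(rlrt_mixture_pvalue(3.2))   # roughly 0.037
```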

10.
The ANOVA method and permutation tests, two heritages of Fisher, have been extensively studied. Several permutation strategies have been proposed by others to obtain a distribution-free test for factors in a fixed-effect ANOVA (i.e., a single-error-term ANOVA). The resulting tests are either approximate or exact. However, there exists no universal exact permutation test that can be applied to an arbitrary design to test a desired factor. An exact permutation strategy applicable to fixed-effect analysis of variance is presented. The proposed method can be used to test any factor, even in the presence of higher-order interactions. In addition, the method has the advantage of being applicable to unbalanced (all-cells-filled) designs, which are very common in practice, and it is the first method with this capability. Simulation studies show that the proposed method has an actual level that stays remarkably close to the nominal level, and its power is always competitive. This is the case even with very small datasets, strongly unbalanced designs and non-Gaussian errors. No other competitor shows such favorable behavior.
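
The exact multi-factor strategy itself is not reproduced here; as a baseline, the sketch below shows the simplest case in which a permutation ANOVA is exact, namely a one-factor design where group labels are exchangeable under the null.

```python
import numpy as np

def one_way_F(y, g):
    """Classical one-way ANOVA F statistic for response y and group labels g."""
    groups = [y[g == lvl] for lvl in np.unique(g)]
    grand = y.mean()
    ss_between = sum(len(x) * (x.mean() - grand) ** 2 for x in groups)
    ss_within = sum(((x - x.mean()) ** 2).sum() for x in groups)
    df_b, df_w = len(groups) - 1, len(y) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

def permutation_anova(y, g, n_perm=4999, seed=0):
    """Permutation p-value: labels are exchangeable under the null of no group
    effect, so permuting g gives an exact (up to Monte Carlo error) test."""
    rng = np.random.default_rng(seed)
    F0 = one_way_F(y, g)
    count = sum(one_way_F(y, rng.permutation(g)) >= F0 for _ in range(n_perm))
    return F0, (1 + count) / (1 + n_perm)

rng = np.random.default_rng(4)
y = np.r_[rng.standard_normal(15), rng.standard_normal(15) + 0.8]
g = np.r_[np.zeros(15), np.ones(15)]
print(permutation_anova(y, g))
```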

11.
The ease of entering a vehicle, known as ingress, is one of the important ergonomic factors that car manufacturers consider during vehicle design. Manufacturers frequently conduct human subject tests to assess ingress discomfort for different vehicle designs. Using subject tests, manufacturers can estimate the proportion of participants who report discomfort when entering a vehicle, referred to in this paper as the fraction disaccommodated (FD). Manufacturers then conduct statistical tests to determine whether the FDs of two vehicle designs are significantly different, and to determine the sample size required for testing the FD difference between two vehicle designs at a pre-specified testing power. Since conducting human subject tests is often expensive and time consuming, an alternative is to estimate the FD using simulated human motion data. Determining the number of simulations required is an important statistical question that depends on the prediction performance of the simulation analysis. In this paper, a dual bootstrap approach is proposed to obtain the standard deviation of the estimated FD based on functional predictors. This standard deviation is then used to calculate the power in testing the difference between two estimated FDs.
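
A simplified sketch of the final power calculation is shown below: a single-level bootstrap gives the standard deviation of an estimated FD from 0/1 discomfort outcomes, and a normal-approximation power is computed for the difference of two FDs. The paper's dual bootstrap with functional predictors is more involved; the function names here are illustrative.

```python
import numpy as np
from scipy.stats import norm

def bootstrap_sd_fd(discomfort, n_boot=2000, seed=0):
    """Bootstrap standard deviation of an estimated fraction disaccommodated (FD),
    here computed from a 0/1 vector of observed or simulated discomfort outcomes."""
    rng = np.random.default_rng(seed)
    boots = [rng.choice(discomfort, size=len(discomfort), replace=True).mean()
             for _ in range(n_boot)]
    return float(np.mean(discomfort)), float(np.std(boots, ddof=1))

def power_fd_difference(fd1, sd1, fd2, sd2, alpha=0.05):
    """Approximate power of a two-sided z-test of FD1 != FD2, using the
    bootstrap standard deviations of the two estimates."""
    se = np.hypot(sd1, sd2)
    z = abs(fd1 - fd2) / se
    zc = norm.ppf(1 - alpha / 2)
    return norm.cdf(z - zc) + norm.cdf(-z - zc)

rng = np.random.default_rng(5)
fd1, sd1 = bootstrap_sd_fd(rng.binomial(1, 0.30, size=200))
fd2, sd2 = bootstrap_sd_fd(rng.binomial(1, 0.20, size=200))
print(power_fd_difference(fd1, sd1, fd2, sd2))
```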

12.
In the statistics literature, a number of procedures have been proposed for testing equality of several groups' covariance matrices when data are complete, but this problem has not been considered for incomplete data in a general setting. This paper proposes statistical tests for equality of covariance matrices when data are missing. A Wald test (denoted by T1) and a likelihood ratio test (LRT, denoted by R), both based on the assumption of normal populations, are developed. It is well known that in the complete-data case the classic LRT and the Wald test constructed under the normality assumption perform poorly when data are not from multivariate normal distributions. As expected, this also holds in the incomplete-data case, which has led us to construct a robust Wald test (denoted by T2) that performs well for both normal and non-normal data. A re-scaled LRT (denoted by R*) is also proposed. A simulation study is carried out to assess the performance of T1, T2, R, and R* in terms of the closeness of their observed significance level to the nominal significance level as well as the power of these tests. It is found that T2 performs very well for both normal and non-normal data in both small and large samples. In addition to its usual applications, we discuss the application of the proposed tests to testing whether a set of data are missing completely at random (MCAR).

13.
Recent research [B. Seo, Distribution theory for unit root tests with conditional heteroskedasticity, J. Econometrics 91 (1999) 113–144] has suggested that the examination of the unit root hypothesis in series exhibiting GARCH behaviour should proceed via joint maximum likelihood (ML) estimation of the unit root testing equation and the GARCH process. The results presented there show the asymptotic distribution of the resulting ML t-test to be a mixture of the Dickey–Fuller and standard normal distributions. In this paper, the relevance of these asymptotic arguments is considered for the finite samples encountered in empirical research. In particular, the influences of sample size, alternative values of the parameters of the GARCH process, and the use of the Bollerslev–Wooldridge covariance matrix estimator upon the finite-sample distribution of the ML t-statistic are explored. It is shown that the resulting critical values for the ML t-statistic are similar to those of the Dickey–Fuller distribution rather than the standard normal, unless a large sample size and empirically unrealistic values of the volatility parameter of the GARCH process are considered. Use of the Bollerslev–Wooldridge covariance matrix estimator exaggerates this finding, causing a leftward shift in the finite-sample distribution of the ML t-statistic. The results of the simulation analysis are illustrated via an application to U.S. short-term interest rates.
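
The kind of finite-sample simulation described above can be sketched as follows; for simplicity the code simulates the OLS Dickey-Fuller t-statistic under GARCH(1,1) errors rather than the joint ML t-statistic studied in the paper, and the GARCH parameter values are illustrative.

```python
import numpy as np

def garch_errors(T, omega=0.05, alpha=0.3, beta=0.65, rng=None):
    """Simulate GARCH(1,1) innovations."""
    rng = np.random.default_rng() if rng is None else rng
    e, h = np.zeros(T), np.full(T, omega / (1 - alpha - beta))
    for t in range(1, T):
        h[t] = omega + alpha * e[t - 1] ** 2 + beta * h[t - 1]
        e[t] = np.sqrt(h[t]) * rng.standard_normal()
    return e

def df_tstat(y):
    """OLS Dickey-Fuller t-statistic (no constant): regress dy_t on y_{t-1}."""
    dy, ylag = np.diff(y), y[:-1]
    rho = ylag @ dy / (ylag @ ylag)
    resid = dy - rho * ylag
    se = np.sqrt(resid @ resid / (len(dy) - 1) / (ylag @ ylag))
    return rho / se

def simulated_critical_value(T=250, reps=2000, level=0.05, seed=0):
    """Finite-sample 5% critical value of the DF t-statistic when the
    innovations follow a GARCH(1,1) process."""
    rng = np.random.default_rng(seed)
    draws = [df_tstat(np.cumsum(garch_errors(T, rng=rng))) for _ in range(reps)]
    return np.quantile(draws, level)

# Typically close to the Dickey-Fuller value of about -1.95 rather than the normal -1.64
print(simulated_critical_value())
```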

14.
The construction of bootstrap hypothesis tests can differ from that of bootstrap confidence intervals because of the need to generate the bootstrap distribution of test statistics under a specific null hypothesis. Similarly, bootstrap power calculations rely on resampling being carried out under specific alternatives. We describe and develop null and alternative resampling schemes for common scenarios, constructing bootstrap tests for the correlation coefficient, variance, and regression/ANOVA models. Bootstrap power calculations for these scenarios are described. In some cases, null-resampling bootstrap tests are equivalent to tests based on appropriately constructed bootstrap confidence intervals. In other cases, particularly those for which simple percentile-method bootstrap intervals are in routine use such as the correlation coefficient, null-resampling tests differ from interval-based tests. We critically assess the performance of bootstrap tests, examining size and power properties of the tests numerically using both real and simulated data. Where they differ from tests based on bootstrap confidence intervals, null-resampling tests have reasonable size properties, outperforming tests based on bootstrapping without regard to the null hypothesis. The bootstrap tests also have reasonable power properties.
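
As an example of the null-resampling idea for the correlation coefficient, the sketch below resamples x and y independently of each other, which imposes the null (in fact full independence), in contrast to a percentile interval obtained by resampling (x, y) pairs jointly. This is one common scheme, not necessarily the exact construction used in the paper.

```python
import numpy as np

def corr_null_bootstrap_test(x, y, n_boot=4999, seed=0):
    """Null-resampling bootstrap test of H0: rho = 0.

    Resampling x and y independently of each other imposes the null of zero
    correlation, in contrast to a percentile bootstrap confidence interval,
    which resamples (x, y) pairs jointly."""
    rng = np.random.default_rng(seed)
    r0 = np.corrcoef(x, y)[0, 1]
    count = 0
    for _ in range(n_boot):
        xb = rng.choice(x, size=len(x), replace=True)
        yb = rng.choice(y, size=len(y), replace=True)
        count += abs(np.corrcoef(xb, yb)[0, 1]) >= abs(r0)
    return r0, (1 + count) / (1 + n_boot)

rng = np.random.default_rng(6)
x = rng.standard_normal(100)
y = 0.3 * x + rng.standard_normal(100)
print(corr_null_bootstrap_test(x, y))
```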

15.
Robust panel unit root tests are developed for cross-sectionally dependent multiple time series. The tests have limiting null distributions derived from standard normal distributions. A Monte Carlo experiment shows that the tests have better finite sample robust performance than existing tests. Some Latin American real exchange rates revealing many outlying observations are analyzed to check the purchasing power parity (PPP) theory.

16.
In this paper, we define the spatial bootstrap test as a residual-based bootstrap method for hypothesis testing of spatial dependence in a linear regression model. Based on Moran's I statistic, the empirical size and power of the bootstrap and asymptotic tests for spatial dependence are evaluated and compared. Under the classical normality assumption of the model, the performance of the spatial bootstrap test is equivalent to that of the asymptotic test in terms of size and power. For more realistic heterogeneous, non-normal distributional models, the applicability of asymptotic normal tests is questionable. In contrast, spatial bootstrap tests show smaller size distortions and higher power than their asymptotic counterparts, especially in cases with a small sample and dense spatial contiguity. Our Monte Carlo experiments indicate that the spatial bootstrap test is an effective alternative to the theoretical asymptotic approach when the classical distributional assumption is violated.
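
A minimal sketch of a residual-based bootstrap of Moran's I for a linear regression is given below; i.i.d. resampling of the OLS residuals imposes the null of no spatial dependence. The ring-shaped weight matrix in the example and the function names are illustrative assumptions.

```python
import numpy as np

def morans_I(e, W):
    """Moran's I for a residual vector e and spatial weight matrix W."""
    n = len(e)
    z = e - e.mean()
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

def residual_bootstrap_moran(y, X, W, n_boot=999, seed=0):
    """Residual-based bootstrap test of no spatial dependence in a linear model.

    OLS residuals are resampled i.i.d. (which imposes the null of no spatial
    dependence), refitted responses are built, and Moran's I is recomputed."""
    rng = np.random.default_rng(seed)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    I0 = morans_I(e, W)
    count = 0
    for _ in range(n_boot):
        yb = X @ beta + rng.choice(e - e.mean(), size=len(y), replace=True)
        bb, *_ = np.linalg.lstsq(X, yb, rcond=None)
        count += morans_I(yb - X @ bb, W) >= I0
    return I0, (1 + count) / (1 + n_boot)

# Toy example on a ring of 50 locations (each unit's neighbours are the adjacent units)
n = 50
W = np.zeros((n, n))
idx = np.arange(n)
W[idx, (idx + 1) % n] = W[idx, (idx - 1) % n] = 1.0
rng = np.random.default_rng(7)
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([1.0, 0.5]) + rng.standard_normal(n)
print(residual_bootstrap_moran(y, X, W))
```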

17.
We introduce the concept of a representative value function in robust ordinal regression applied to multiple criteria sorting problems. The proposed approach can be seen as an extension of UTADISGMS, a new multiple criteria sorting method that aims at assigning actions to p pre-defined and ordered classes. The preference information supplied by the decision maker (DM) consists of desired assignments of some reference actions to one or several contiguous classes, called assignment examples. The robust ordinal regression builds a set of general additive value functions compatible with the assignment examples and results in two assignments: necessary and possible. The necessary assignment specifies the range of classes to which an action can be assigned considering all compatible value functions simultaneously. The possible assignment specifies, in turn, the range of classes to which the action can be assigned considering any compatible value function individually. In this paper, we propose a way of selecting a representative value function among the set of compatible ones. We identify a few targets which build on results of the robust ordinal regression and could be attained by a representative value function. They concern enhancement of the differences between the possible assignments of two actions. In this way, the selected function highlights the most stable part of the robust sorting, and can be perceived as representative in the sense of the robustness concern. We envisage two possible uses of the representative value function in decision support systems. The first is an explicit exhibition of the function along with the results of the UTADISGMS method, in order to help the DM understand the robust sorting. The other is an autonomous use, in order to supply the DM with a sorting obtained by an example-based procedure driven by the chosen function. Three case studies illustrating the use of a representative value function in real-world decision problems are presented. One of these studies is devoted to the comparison of the introduced concept of representativeness with alternative procedures for determining a single value function, which we adapted to sorting problems because they were originally proposed for ranking problems.

18.
A fuzzy regression model is developed to construct the relationship between the response and explanatory variables in fuzzy environments. To enhance explanatory power and take into account the uncertainty of the formulated model and parameters, a new operator, called the fuzzy product core (FPC), is proposed for the formulation process to establish fuzzy regression models with fuzzy parameters using fuzzy observations that include fuzzy response and explanatory variables. In addition, the signs of the parameters can be determined in the model-building process. Compared to existing approaches, the proposed approach reduces the amount of unnecessary or unimportant information arising from fuzzy observations and determines the signs of the parameters in the models to increase model performance. This addresses a weakness of related approaches, in which the parameters in the models are fuzzy and must be predetermined in the formulation process. The proposed approach outperforms existing models in terms of distance, mean similarity, and credibility measures, even when crisp explanatory variables are used.

19.
The asymptotic and exact conditional methods are widely used to compare two ordered multinomials. The asymptotic method is well known for its good performance when the sample size is sufficiently large. However, Brown et al. (2001) gave a contrary example in which this method performed liberally even when the sample size was large. In practice, when the sample size is moderate, the exact conditional method is a good alternative, but it is often criticised for its conservativeness. Exact unconditional methods are less conservative, but their computational burden usually renders them infeasible in practical applications. To address these issues, we develop an approximate unconditional method in this paper. Its computational burden is successfully alleviated by using an algorithm that is based on polynomial multiplication. Moreover, the proposed method not only corrects the conservativeness of the exact conditional method, but also produces a satisfactory type I error rate. We demonstrate the practicality and applicability of this proposed procedure with two real examples, and simulation studies are conducted to assess its performance. The results of these simulation studies suggest that the proposed procedure outperforms the existing procedures in terms of the type I error rate and power, and is a reliable and attractive method for comparing two ordered multinomials.

20.
Nonparametric procedures are proposed for testing exponentiality against several new aging classes of life distributions. The main idea is to test whether the lifetime of a system whose failure has occurred between visits belongs to one of these alternative new aging classes of life distributions. This knowledge enables better estimation of premium amounts for clients by estimating value more accurately when encountering left- or interval-censored data. One possible approach to performing these tests is based on the empirical distribution function. The limiting distributions of the presented test statistics are given for a well-known alternative when the null distribution is exponential. For small sample sizes, the power of the tests is calculated. The results derived by a Monte Carlo method show the excellent power of these procedures against some common alternative distributions.
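
As a generic baseline (not the specific statistics for the new aging classes, and without the censoring the paper targets), the sketch below tests exponentiality with an EDF statistic whose null distribution is calibrated by a parametric bootstrap because the rate is estimated from the data.

```python
import numpy as np
from scipy import stats

def exp_edf_test(x, n_boot=999, seed=0):
    """EDF-based test of exponentiality with the rate estimated from the data.

    Because the scale is estimated, the plain Kolmogorov-Smirnov distribution is
    not valid, so a parametric bootstrap (Lilliefors-type) calibration is used."""
    rng = np.random.default_rng(seed)
    scale = x.mean()
    d0 = stats.kstest(x, "expon", args=(0, scale)).statistic
    count = 0
    for _ in range(n_boot):
        xb = rng.exponential(scale, size=len(x))
        count += stats.kstest(xb, "expon", args=(0, xb.mean())).statistic >= d0
    return d0, (1 + count) / (1 + n_boot)

rng = np.random.default_rng(8)
print(exp_edf_test(rng.weibull(1.5, size=100)))   # Weibull(1.5) is an IFR alternative
```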
