Similar Documents
1.
We propose a generalization of the Dynamic Conditional Correlation multivariate GARCH model of Engle [R.F. Engle, Dynamic conditional correlation: a simple class of multivariate generalized autoregressive conditional heteroskedasticity models, Journal of Business and Economic Statistics 20 (2002) 339–350] and of the Asymmetric Dynamic Conditional Correlation model of Cappiello et al. [L. Cappiello, R.F. Engle, K. Sheppard, Asymmetric dynamics in the correlations of global equity and bond returns, Journal of Financial Econometrics 25 (2006) 537–572]. The model we propose introduces a block structure in the parameter matrices that allows for interdependence with a reduced number of parameters. Our model nests the Flexible Dynamic Conditional Correlation model of Billio et al. [M. Billio, M. Caporin, M. Gobbo, Flexible dynamic conditional correlation multivariate GARCH for asset allocation, Applied Financial Economics Letters 2 (2006) 123–130] and is named the Quadratic Flexible Dynamic Conditional Correlation Multivariate GARCH. In the paper, we provide conditions for positive definiteness of the conditional correlations. We also present an empirical application to the Italian stock market comparing alternative correlation models for portfolio risk evaluation.
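The scalar DCC recursion that this line of models builds on can be sketched numerically. The toy data, parameter values and three-asset setup below are illustrative assumptions, not the block-structured specification proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy standardized residuals for 3 assets over 500 days.
T, N = 500, 3
eps = rng.standard_normal((T, N))

# Unconditional correlation matrix (the DCC target "Q-bar").
Q_bar = np.corrcoef(eps, rowvar=False)

a, b = 0.05, 0.90                 # scalar DCC parameters, a + b < 1
Q = Q_bar.copy()
R_path = np.empty((T, N, N))

for t in range(T):
    e = eps[t][:, None]
    # Quasi-correlation update, then rescaling to a proper correlation matrix.
    Q = (1 - a - b) * Q_bar + a * (e @ e.T) + b * Q
    d = 1.0 / np.sqrt(np.diag(Q))
    R_path[t] = d[:, None] * Q * d[None, :]

R_last = R_path[-1]
```

With Q_bar positive definite and a + b < 1, every Q_t (and hence every R_t) stays positive definite; conditions of this kind are what the paper establishes for its richer parameterization.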

2.
The paper forecasts conditional correlations between three classes of international financial assets, namely stock, bond and foreign exchange, for two countries, Australia and New Zealand. Forecasting is conducted using three multivariate GARCH models, namely the CCC model [T. Bollerslev, Modelling the coherence in short-run nominal exchange rates: a multivariate generalized ARCH model, Rev. Econ. Stat. 72 (1990) 498–505], the VARMA-GARCH model [S. Ling, M. McAleer, Asymptotic theory for a vector ARMA-GARCH model, Econometric Theory 19 (2003) 280–310], and the VARMA-AGARCH model [M. McAleer, S. Hoti, F. Chan, Structure and asymptotic theory for multivariate asymmetric volatility, Econometric Rev., in press]. A rolling window technique is used to forecast 1-day-ahead conditional correlations. To evaluate the impact of model specification on conditional correlation forecasts, the paper calculates and compares the correlations between the conditional correlation forecasts resulting from the three models. The paper finds evidence of volatility spillovers and asymmetric effects of negative and positive shocks on the conditional variance in most pairs of series. However, it suggests that incorporating volatility spillovers and asymmetry does not contribute to better conditional correlation forecasts.
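A rolling-window, 1-step-ahead correlation forecast can be illustrated with plain sample correlations. The simulated two-asset returns and 100-day window below are hypothetical; the actual exercise would use correlations implied by the fitted multivariate GARCH models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily returns for two assets (say, a stock and a bond index).
T = 300
z = rng.standard_normal((T, 2))
returns = z @ np.array([[1.0, 0.3], [0.0, 1.0]])   # mildly dependent

window = 100
forecasts = []
for t in range(window, T):
    sample = returns[t - window:t]                 # rolling estimation window
    # 1-day-ahead forecast: the correlation estimated on the window.
    forecasts.append(np.corrcoef(sample, rowvar=False)[0, 1])

forecasts = np.array(forecasts)
```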

3.
In this paper we introduce the Birnbaum–Saunders autoregressive conditional duration (BS-ACD) model as an alternative to the existing ACD models which allow a unimodal hazard function. The BS-ACD model is the first ACD model to integrate the concept of conditional quantile estimation into an ACD model by specifying the time-varying model dynamics in terms of the conditional median duration, instead of the conditional mean duration. In the first half of this paper we illustrate how the BS-ACD model relates to the traditional ACD model, and in the second half we discuss the assessment of goodness-of-fit for ACD models in general. In order to facilitate both of these points, we explicitly illustrate the similarities and differences between the BS-ACD model and the Generalized Gamma ACD (GG-ACD) model by comparing and contrasting their formulation, estimation, and results from fitting both models to samples for six NYSE securities.
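The conditional-mean recursion of the traditional ACD model (which the BS-ACD replaces with a conditional median) can be sketched as a simple ACD(1,1) simulation with unit-mean exponential innovations; the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

# ACD(1,1): x_t = psi_t * eps_t with E[eps_t] = 1 (unit-mean exponential),
# psi_t = omega + alpha * x_{t-1} + beta * psi_{t-1}.
omega, alpha, beta = 0.1, 0.1, 0.8
n = 50_000
x = np.empty(n)
psi = np.empty(n)
psi[0] = omega / (1.0 - alpha - beta)    # unconditional mean duration
x[0] = psi[0] * rng.exponential(1.0)
for t in range(1, n):
    psi[t] = omega + alpha * x[t - 1] + beta * psi[t - 1]
    x[t] = psi[t] * rng.exponential(1.0)

mean_duration = x.mean()   # should be near omega / (1 - alpha - beta) = 1.0
```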

4.
One of the most striking results on asset pricing in the last 20 years is the better forecastability of long-horizon returns over one-step return forecasts. This could seem a paradox, given that the farther our forecast horizon, the greater the uncertainty we are bound to face. This point can be found in Campbell and Shiller [Journal of Finance 43 (1988) 661; Journal of Finance 40 (1985) 793; American Economic Review 76 (1986) 1142], among others. In this paper, we offer an alternative explanation to this “forecast paradox” that is in agreement with Kim et al. [Review of Economic Studies 30 (1992) 25], who found that the negative serial correlation in long-horizon returns depends very much on the sample choice. Our explanation is based on the existence of simultaneous shifts in the time series of the equilibrium stock price and dividends, and relies on the concept of co-breaking [D.F. Hendry, A theory of co-breaking, Mimeo, Nuffield College, Oxford, 1995]. We put forward a stochastic present value model in which we are able to show how shifts in the process for dividends lead to shifts in the equilibrium stock price. This has important implications for multiperiod forecasting, as we demonstrate in this paper. An empirical application supports our results: we model earnings, dividends, stock prices, and the risk-free interest rate in the United States from 1926 to 1985. We can distinguish three different historical periods in which the processes for dividends and the equilibrium stock price are characterized by different properties in terms of their means and variances. Our empirical model is extended to a forecasting exercise in which the “forecast paradox” is resolved.

5.
Testing the correct model specification hypothesis for artificial neural network (ANN) models of the conditional mean is not standard. The traditional Wald, Lagrange multiplier, and quasi-likelihood ratio statistics weakly converge to functions of Gaussian processes, rather than to convenient chi-squared distributions. Also, their large-sample null distributions are problem dependent, limiting applicability. We overcome this challenge by applying functional regression methods of Cho et al. [8] to extreme learning machines (ELM). The Wald ELM (WELM) test statistic proposed here is easy to compute and has a large-sample standard chi-squared distribution under the null hypothesis of correct specification. We provide associated theory for time-series data and affirm our theory with some Monte Carlo experiments.

6.
Testing for stochastic dominance among distributions is an important issue in the study of asset management, income inequality, and market efficiency. This paper conducts Monte Carlo simulations to examine the sizes and powers of several commonly used stochastic dominance tests when the underlying distributions are correlated or heteroskedastic. Our Monte Carlo study shows that the test developed by Davidson and Duclos [R. Davidson, J.Y. Duclos, Statistical inference for stochastic dominance and for the measurement of poverty and inequality, Econometrica 68 (6) (2000) 1435–1464] has better size and power performance than two alternative tests developed by Kaur et al. [A. Kaur, B.L.S.P. Rao, H. Singh, Testing for second order stochastic dominance of two distributions, Econ. Theory 10 (1994) 849–866] and Anderson [G. Anderson, Nonparametric tests of stochastic dominance in income distributions, Econometrica 64 (1996) 1183–1193]. In addition, we find that when the underlying distributions are heteroskedastic, both the size and power of the Davidson and Duclos test are superior to those of the two alternative tests.
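The second-order dominance curves that such tests compare can be computed empirically. The shifted-normal samples below are a toy case where dominance holds by construction, not the simulation design of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two toy return samples: X is Y shifted up by 1, so X should dominate Y
# at first and hence at second order.
Y = rng.standard_normal(2000)
X = Y + 1.0

def d2(sample, grid):
    # Empirical second-order dominance curve: E[(z - x)_+] at each z,
    # i.e. the integrated empirical CDF compared in Davidson-Duclos-type tests.
    return np.mean(np.maximum(grid[:, None] - sample[None, :], 0.0), axis=1)

grid = np.linspace(-3, 4, 50)
D2_X, D2_Y = d2(X, grid), d2(Y, grid)

# X second-order dominates Y if its curve lies weakly below Y's everywhere.
x_dominates_y = bool(np.all(D2_X <= D2_Y))
```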

7.
Tourism is one of the key service industries in Thailand, with a 5.27% share of Gross Domestic Product in 2003. Since 2000, international tourist arrivals to Thailand, particularly those from East Asia, have been on a continuous upward trend. Tourism forecasts can be made based on previous observations, so a historical analysis of tourist arrivals can provide a useful understanding of inbound trips and the behaviour of trends in foreign tourist arrivals to Thailand. As tourism is seasonal, a good forecast is required for stakeholders in the industry to manage risk. Previous research on tourism forecasting has typically been based on annual and monthly data analysis, while few past empirical tourism studies using the Box–Jenkins approach have taken account of pre-testing for seasonal unit roots based on the frameworks of Franses [P.H. Franses, Seasonality, nonstationarity and the forecasting of monthly time series, International Journal of Forecasting 7 (1991) 199–208] and Beaulieu and Miron [J.J. Beaulieu, J.A. Miron, Seasonal unit roots in aggregate U.S. data, Journal of Econometrics 55 (1993) 305–328]. This paper examines the time series of tourism demand, specifically monthly tourist arrivals from six major countries in East Asia to Thailand, from January 1971 to December 2005. It analyses stationary and non-stationary tourist arrival series by formally testing for the presence of unit roots and seasonal unit roots prior to estimation, model selection and forecasting. Various Box–Jenkins autoregressive integrated moving average (ARIMA) models and seasonal ARIMA models are estimated, with the tourist arrival series showing seasonal patterns. The fitted ARIMA and seasonal ARIMA models forecast tourist arrivals from East Asia very well for the period 2006(1)–2008(1). Total monthly and annual forecasts can be obtained through temporal and spatial aggregation.
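The seasonal and first differencing underlying a seasonal ARIMA specification can be sketched on a simulated monthly series; the trend and annual cycle below are synthetic stand-ins for an arrivals series:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy monthly "arrivals" series: trend + annual seasonal cycle + noise.
n = 35 * 12                                  # 35 years of monthly data
t = np.arange(n)
y = 0.5 * t + 50 * np.sin(2 * np.pi * t / 12) + rng.standard_normal(n)

# (1 - L^12): seasonal difference removes the annual cycle;
# (1 - L): first difference removes the trend.
seasonal_diff = y[12:] - y[:-12]
stationary = seasonal_diff[1:] - seasonal_diff[:-1]

def seasonal_amplitude(x):
    # Spread of the average value per calendar month.
    m = len(x) - len(x) % 12
    months = x[:m].reshape(-1, 12).mean(axis=0)
    return months.max() - months.min()

amp_before = seasonal_amplitude(y)           # large: strong annual cycle
amp_after = seasonal_amplitude(stationary)   # small: cycle differenced away
```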

8.
When forecasts are assessed by a general loss (cost-of-error) function, the optimal point forecast is, in general, not the conditional mean, and depends on the conditional volatility, which, for stock returns, is time-varying. In order to provide forecasts of daily returns of 30 DJIA stocks under a general multivariate loss function, the following issues are addressed. We discuss what conditions define a multivariate loss function, and a simple class of such functions is proposed. Based on suitable combinations of univariate losses, the suggested multivariate functions are convenient for practical applications with many variables. To keep the computational aspect tractable, a flexible multivariate GARCH model is employed in estimating the conditional forecast distributions. The model easily copes with a large number of series while allowing for skewness, fat tails, non-ellipticity, and tail dependence. Based on Engle’s DCC GARCH, it uses multivariate affine generalized hyperbolic distributions as the conditional probability law, and the number of parameters to be estimated simultaneously does not depend on the number of series. The model is fitted using daily data from 2002 to 2007 (keeping data from 2008 for out-of-sample forecasts), and a bootstrap procedure is used to derive point forecasts under several multivariate loss functions of the proposed type.
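The first point, that the optimal point forecast under a general loss is not the conditional mean, can be checked by Monte Carlo with a univariate lin-lin (check) loss; the skewed toy distribution and tau = 0.8 are illustrative, and the multivariate losses of the paper combine such univariate pieces:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated one-day-ahead return distribution (toy, right-skewed).
draws = rng.gamma(shape=2.0, scale=1.0, size=100_000) - 2.0

def linlin_loss(forecast, outcomes, tau=0.8):
    # Lin-lin (check) loss: under-predictions cost tau, over-predictions 1 - tau.
    err = outcomes - forecast
    return np.mean(np.where(err >= 0, tau * err, (tau - 1) * err))

# Grid-search the point forecast minimizing expected lin-lin loss.
candidates = np.linspace(-2, 3, 501)
losses = [linlin_loss(c, draws) for c in candidates]
best = candidates[int(np.argmin(losses))]

# Theory: the optimal forecast under lin-lin loss is the tau-quantile,
# not the mean of the predictive distribution.
q80 = np.quantile(draws, 0.8)
```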

9.
A sequential Monte Carlo method for estimating GARCH models subject to an unknown number of structural breaks is proposed. Particle filtering techniques allow for fast and efficient updates of posterior quantities and forecasts in real time. The method conveniently deals with the path dependence problem that arises in these types of models. The method is shown to work well on simulated data. Applied to daily NASDAQ returns, the evidence favors a partial structural break specification, in which only the intercept of the conditional variance equation has breaks, over the full structural break specification, in which all parameters are subject to change. The empirical application underscores the importance of model assumptions when investigating breaks. A model with normal return innovations results in strong evidence of breaks, while more flexible return distributions, such as t-innovations or a GARCH-jump mixture model, still favor breaks but indicate much more uncertainty regarding their timing and impact.
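A core step of any particle filter is resampling particles by their likelihood weights. The systematic resampling routine below is a generic sketch, with the particle values standing in for, say, candidate conditional-variance intercepts; it is not the paper's full algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)

def systematic_resample(weights, rng):
    # Systematic resampling: one uniform draw, stratified thresholds.
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                  # guard against rounding error
    return np.searchsorted(cumulative, positions)

# Toy particle cloud: each particle carries a parameter value, and the
# weights come from the likelihood of the latest observation.
particles = np.array([0.01, 0.05, 0.10, 0.50])
weights = np.array([0.1, 0.6, 0.2, 0.1])

idx = systematic_resample(weights, rng)
resampled = particles[idx]   # high-weight particles are duplicated
```

Systematic resampling keeps the particle count fixed and guarantees that a particle with weight w appears at least floor(n*w) times, which reduces resampling noise relative to multinomial resampling.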

10.
The issue of whether or not to form a monetary union in East Asia remains a hot issue in the study of the East Asian economies. Most of the existing studies apply a framework focusing on the symmetric issue of the fundamental shocks and the extent of correlations by applying the Blanchard and Quah [O.J. Blanchard, D. Quah, The dynamic effects of aggregate demand and supply disturbances, American Economic Review 79 (1989) 655–673] structural vector autoregression (VAR) technique, which includes first-differenced variables in the model and examines only the bilateral relationships. However, the shock symmetry does not necessarily require the co-movements of the real output variables between the countries concerned. The present paper employs the Johansen [S. Johansen, Statistical analysis of cointegration vectors, Journal of Economic Dynamics and Control 12 (1988) 231–254] cointegration approach to check the long-run co-movements of real outputs among the East Asian countries, Japan and the United States to draw some implications about forming a monetary union in the region. The results suggest that some groups of Asian NIEs plus the United States will be potential candidates to form a monetary union. Mainland China is not suggested as a member country of a monetary union with any of the grouped economies. More interestingly, the ASEAN countries alone are not a feasible group to form a monetary union unless Japan is included, which has important implications for the role of Japan towards the formation of a regional monetary union.

11.
In this article we show via simulations how the stochastic permanent break (STOPBREAK) model proposed by Engle and Smith (1999) is related to the fractionally integrated hypothesis. This connection was established by Diebold and Inoue (2001), who showed theoretically and analytically that stochastic regime switching is easily confused with long memory. In this paper, we use a version of the tests of Robinson (1994) for testing I(d) statistical models in the context of stochastic permanent break models, and give further evidence that both types of processes are easily confused.

12.
Since the introduction of the Autoregressive Conditional Heteroscedasticity (ARCH) model of Engle [R. Engle, Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation, Econometrica 50 (1982) 987–1007], the literature on modelling the conditional second moment has grown rapidly over the last two decades. Many extensions and alternatives to the original ARCH model have been proposed, aiming to capture the dynamics of volatility more accurately. Interestingly, the Quasi Maximum Likelihood Estimator (QMLE) with normal density is typically used to estimate the parameters in these models. As such, the higher moments of the underlying distribution are assumed to be the same as those of the normal distribution. However, various studies reveal that the higher moments, such as the skewness and kurtosis of the distribution of financial returns, are not likely to be the same as those of the normal distribution, and in some cases they are not even constant over time. This has significant implications for risk management, especially for the calculation of Value-at-Risk (VaR), which focuses on the negative quantile of the return distribution. Failure to accurately capture the shape of the negative quantile produces inaccurate measures of risk, and subsequently leads to misleading decisions in risk management. This paper proposes a solution by introducing a general framework to model the distribution of financial returns using the maximum entropy density (MED). The main advantage of the MED is that it estimates the distribution function directly from a given set of data, and it provides a convenient framework for modelling higher-order moments up to any arbitrary finite order k.
However, this flexibility comes at a high cost in computational time as k increases, so this paper proposes an alternative model that reduces computation time substantially. Moreover, the sensitivity of the parameters in the MED with respect to dynamic changes in the moments is derived analytically. This result is important as it relates the dynamic structure of the moments to the parameters in the MED. The usefulness of this approach is demonstrated using 5-minute intra-daily returns of the Euro/USD exchange rate.
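A maximum entropy density subject to moment constraints has the exponential-family form f(x) proportional to exp(sum_j lambda_j x^j). The discretised fit below, matching a mean of 0 and a variance of 1 by gradient descent on the convex dual, is a minimal static sketch (with these two moments the MED is Gaussian), not the paper's dynamic estimator:

```python
import numpy as np

# Maximum entropy density on a grid subject to mean and variance constraints.
x = np.linspace(-5, 5, 401)
dx = x[1] - x[0]
T = np.vstack([x, x**2])          # sufficient statistics
m = np.array([0.0, 1.0])          # target moments: mean 0, second moment 1

lam = np.zeros(2)
for _ in range(5000):
    # Exponential-family density implied by the current multipliers.
    f = np.exp(lam @ T)
    f /= f.sum() * dx
    moments = (T * f).sum(axis=1) * dx
    lam -= 0.1 * (moments - m)    # gradient step on the convex dual

fitted_mean = moments[0]
fitted_var = moments[1] - moments[0] ** 2
```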

13.
An economic off-line inspection, disposition, and rework (IDR) model for a batch produced from an unreliable production system was recently proposed by Wang et al. [Economic optimization of off-line inspection with rework consideration, European Journal of Operational Research, 194 (2009) 807–813], where the process is assumed to possess a discrete general shift distribution. Unfortunately, there are some important flaws in the proposed IDR model; in particular, while obtaining the optimal IDR policy, they incorrectly assumed that the process shift distribution had the memoryless property. As a result, the purpose of this paper is to reformulate the IDR model, and to develop a solution procedure to find the optimal IDR policy.
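The flaw being corrected is easy to illustrate: only a geometric shift distribution has a constant hazard (the memoryless property), while a general discrete shift distribution does not. A small check, with made-up probabilities:

```python
def hazard(pmf):
    # P(shift at period k | no shift before k) = p_k / sum_{j >= k} p_j.
    tail = sum(pmf)
    rates = []
    for p in pmf:
        rates.append(p / tail)
        tail -= p
    return rates

# Geometric shift distribution: memoryless, so the hazard is constant.
q = 0.2
geometric = [q * (1 - q) ** k for k in range(200)]

# A general discrete shift distribution (made-up numbers): the hazard
# varies with k, so a memoryless treatment of it is incorrect.
general = [0.1, 0.4, 0.2, 0.2, 0.1]

h_geo = hazard(geometric)
h_gen = hazard(general)
```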

14.
15.
An estimator of conditional wage distributions based on a piecewise-linear specification of the conditional hazard function is proposed. Under a minimal set of assumptions, the estimator is flexible enough to capture almost any underlying relationship, and is not affected by the curse of dimensionality. It also allows us to derive estimates of the conditional Lorenz curves and Gini indices. The methodology is used to investigate wage trends in Spain over 1994–1999. The estimation results provide evidence that there have been strong decreases in both the returns to schooling and the inequality indices for workers with low levels of experience; these decreases may partly be explained by the “overeducation” phenomenon, which intensified in this period.
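The Gini index derived from an estimated wage distribution can be computed from any sample with the standard sorted-sample formula; the equal and lognormal wage samples below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)

def gini(wages):
    # Gini index via the sorted-sample (rank) formula, equivalent to the
    # mean absolute difference divided by twice the mean.
    w = np.sort(np.asarray(wages, dtype=float))
    n = len(w)
    ranks = np.arange(1, n + 1)
    return (2.0 * np.sum(ranks * w) / (n * np.sum(w))) - (n + 1.0) / n

equal = np.full(1000, 10.0)                             # perfect equality: Gini 0
lognormal = rng.lognormal(mean=3.0, sigma=0.5, size=10_000)  # skewed wages
```

For lognormal wages with sigma = 0.5 the theoretical Gini is about 0.28, which the sample estimate should approach.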

16.
For over 20 years the NEH heuristic of Nawaz, Enscore, and Ham [A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem. Omega, The International Journal of Management Science 1983;11:91–5] has been commonly regarded as the best heuristic for solving the NP-hard problem of minimizing the makespan in permutation flow shops. The strength of NEH lies mainly in its priority order according to which jobs are selected to be scheduled during the insertion phase. Framinan et al. [Different initial sequences for the heuristic of Nawaz, Enscore and Ham to minimize makespan, idle time or flowtime in the static permutation flowshop problem. International Journal of Production Research 2003;41:121–48] presented the results of an extensive study to conclude that the NEH priority order is superior to 136 different orders examined. Based upon the concept of Johnson's algorithm, we propose a new priority order combined with a simple tie-breaking method that leads to a heuristic that outperforms NEH for all problem sizes.
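A compact version of the NEH insertion heuristic itself, with the original largest-total-processing-time priority order rather than the new Johnson-based order proposed here, can be sketched as follows; the 4-job, 3-machine instance is made up:

```python
def makespan(seq, p):
    # p[j][m]: processing time of job j on machine m; standard flow-shop
    # completion-time recursion over the sequence seq.
    m = len(p[0])
    c = [0.0] * m
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def neh(p):
    # NEH: order jobs by decreasing total processing time, then insert
    # each job at the position minimizing the partial makespan.
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = []
    for j in order:
        best = None
        for i in range(len(seq) + 1):
            cand = seq[:i] + [j] + seq[i:]
            cm = makespan(cand, p)
            if best is None or cm < best[0]:
                best = (cm, cand)
        seq = best[1]
    return seq, best[0]

# Small 4-job, 3-machine instance (made-up processing times).
p = [[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8]]
seq, cmax = neh(p)
```

The proposed improvement keeps this insertion phase but replaces the priority order and adds a tie-breaking rule.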

17.
CSP–CASL integrates the process algebra CSP [T. Hoare, Communicating Sequential Processes, Prentice-Hall, Englewood Cliffs, NJ, 1985; A.W. Roscoe, The Theory and Practice of Concurrency, Prentice-Hall, Englewood Cliffs, NJ, 1998] with the algebraic specification language CASL [P.D. Mosses (Ed.), CASL Reference Manual, Lecture Notes in Computer Science, Vol. 2960, Springer, Berlin, 2004; E. Astesiano, M. Bidoit, B. Krieg-Brückner, H. Kirchner, P.D. Mosses, D. Sannella, A. Tarlecki, CASL—the common algebraic specification language, Theoret. Comput. Sci. 286 (2002) 153–196]. Its novel aspects include the combination of denotational semantics in the process part and, in particular, loose semantics for the data types covering both concepts of partiality and sub-sorting. Technically, this integration involves the development of a new so-called data-logic formulated as an institution. This data-logic serves as a link between the institution underlying CASL and the alphabet of communications necessary for the CSP semantics. Besides being generic in the various denotational CSP semantics, this construction leads also to an appropriate notion of refinement with clear relations to both data refinement in CASL and process refinement in CSP.

18.
This paper presents a semi-parametric method of parameter estimation for the class of logarithmic ACD (Log-ACD) models using the theory of estimating functions (EF). A number of theoretical results related to the corresponding EF estimators are derived. A simulation study is conducted to compare the performance of the proposed EF estimates with the corresponding ML (maximum likelihood) and QML (quasi maximum likelihood) estimates. It is argued that the EF estimates are relatively easier to evaluate and have sampling properties comparable with those of the ML and QML methods. Furthermore, the suggested EF estimates can be obtained without any knowledge of the distribution of the errors. We apply the suggested methodology to a real financial duration dataset. Our results show that the Log-ACD(1,1) model fits the data well, giving relatively smaller variation in forecast errors than the linear ACD(1,1), regardless of the method of estimation. In addition, the Diebold–Mariano (DM) and superior predictive ability (SPA) tests have been applied to confirm the performance of the suggested methodology. It is shown that the new method is slightly better than traditional methods in terms of computation; however, there is no significant difference in forecasting ability across the models and methods.

19.
An n-dimensional joint uniform distribution is defined as a distribution whose one-dimensional marginals are uniform on some interval I. This interval is taken to be [0,1] or, when more convenient, another fixed interval. The specification of joint uniform distributions in a way which captures intuitive dependence structures and also enables sampling routines is considered. The question of whether every n-dimensional correlation matrix can be realized by a joint uniform distribution remains open. It is known, however, that the rank correlation matrices realized by the joint normal family are sparse in the set of correlation matrices. A joint uniform distribution is obtained by specifying conditional rank correlations on a regular vine, and a copula is chosen to realize the conditional bivariate distributions corresponding to the edges of the vine. In this way a distribution is sampled which corresponds exactly to the specification. The relation between the conditional rank correlations on a vine and the correlation matrix of the corresponding distribution is complex, and depends on the copula used. Some results for the elliptical copulae are given.
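Pushing correlated normals through the normal CDF yields uniform marginals with Gaussian-copula dependence, whose Spearman rank correlation is (6/pi) * arcsin(rho/2); the sketch below checks this empirically. It illustrates the generic copula-based construction of a joint uniform distribution, not the vine specification itself:

```python
import math
import numpy as np

rng = np.random.default_rng(8)

def phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Correlated standard normals with Pearson correlation rho.
rho = 0.6
n = 20_000
z1 = rng.standard_normal(n)
z2 = rho * z1 + math.sqrt(1 - rho**2) * rng.standard_normal(n)

# Transform to uniform [0,1] marginals (Gaussian copula).
u = np.vectorize(phi)(z1)
v = np.vectorize(phi)(z2)

# Empirical Spearman rank correlation of the uniform pair.
ru = u.argsort().argsort()
rv = v.argsort().argsort()
empirical_rank_corr = float(np.corrcoef(ru, rv)[0, 1])

# Known Gaussian-copula relation between Pearson and rank correlation.
target_rank_corr = (6.0 / math.pi) * math.asin(rho / 2.0)
```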

20.
In a recent article, Wang et al. [Wang, N. S., Yi, R. H., & Wang, W. (2008). Evaluating the performances of decision-making units based on interval efficiencies. Journal of Computational and Applied Mathematics, 216, 328–343] proposed a pair of interval data envelopment analysis (DEA) models for measuring the overall performances of decision-making units (DMUs) with crisp data. In this paper, we demonstrate that these interval DEA models face problems in determining the efficiency interval for each DMU when there are zero values for every input. To remedy this drawback, we propose a pair of improved interval DEA models which make it possible to perform a DEA analysis using the concepts of the best and the worst relative efficiencies. Two numerical examples are examined using the improved interval DEA models. One of them is a real-world application involving 42 educational departments in one of the branches of the Islamic Azad University in Iran, which shows the advantages and applicability of the improved approach in real-life situations.
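The zero-input drawback is easy to see in the simplest single-input, single-output ratio form of relative efficiency; the numbers below are made up:

```python
# Best-relative-efficiency ratios for single-input, single-output DMUs:
# each DMU's output/input ratio divided by the best ratio observed.
inputs = [2.0, 4.0, 5.0]
outputs = [4.0, 6.0, 5.0]

ratios = [o / i for i, o in zip(inputs, outputs)]
best = max(ratios)
efficiency = [r / best for r in ratios]

# A zero input makes the ratio undefined (unbounded), which is the kind
# of degeneracy an interval DEA model must be designed to handle.
try:
    _ = 3.0 / 0.0
    zero_input_ok = True
except ZeroDivisionError:
    zero_input_ok = False
```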

