Similar Literature
20 similar documents found.
1.
The existing estimation methods for the model parameters of the unified GARCH-Itô model (Kim and Wang, 2014) require observations over a long time horizon to achieve consistency. In practice, however, it is unrealistic to assume that the structure of a stock price remains stable over such a long period. In this article, we introduce an estimation method for the model parameters based on high-frequency financial data observed over a finite period. In particular, we establish a quasi-likelihood function for the daily integrated volatilities, and realized volatility estimators are adopted to estimate those integrated volatilities. The model parameters are estimated by maximizing the quasi-likelihood function. We establish asymptotic theory for the proposed estimator, and a simulation study examines its finite-sample performance. We apply the proposed estimation approach to Bank of America stock price data.
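To make the construction concrete, here is a minimal sketch, assuming a GARCH(1,1)-type recursion h_i = omega + beta*h_{i-1} + gamma*RV_{i-1} as a stand-in for the exact GARCH-Itô conditional-volatility form; the function names and the toy data are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def realized_volatility(intraday_prices):
    """Daily realized volatility: sum of squared intraday log returns."""
    r = np.diff(np.log(intraday_prices))
    return np.sum(r ** 2)

def neg_quasi_loglik(theta, rv):
    """Gaussian quasi-likelihood for the daily integrated volatilities,
    with h_i following a GARCH(1,1)-type recursion driven by lagged
    realized volatility (an assumed stand-in for the GARCH-Ito form)."""
    omega, beta, gamma = theta
    h = np.empty_like(rv)
    h[0] = rv.mean()                      # initialize at the sample mean
    for i in range(1, len(rv)):
        h[i] = omega + beta * h[i - 1] + gamma * rv[i - 1]
    return 0.5 * np.sum(np.log(h) + rv / h)

# rv: daily realized volatilities computed from high-frequency data
rng = np.random.default_rng(0)
rv = 1e-4 * (1 + 0.5 * rng.standard_normal(250) ** 2)   # toy stand-in
res = minimize(neg_quasi_loglik, x0=[rv.mean() * 0.1, 0.6, 0.3], args=(rv,),
               method="L-BFGS-B",
               bounds=[(1e-12, None), (0.0, 0.999), (0.0, 0.999)])
print("estimated (omega, beta, gamma):", res.x)
```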

2.
A new multivariate stochastic volatility estimation procedure for financial time series is proposed. A Wishart autoregressive process is considered for the volatility precision (inverse covariance) matrix, and a two-step procedure is adopted for its estimation: the first step performs conditional inference on the autoregressive parameters, and the second step performs unconditional inference based on a Newton-Raphson iterative algorithm. The proposed methodology, which is mostly Bayesian, is suitable for medium-dimensional data and bridges the gap between closed-form estimation and simulation-based estimation algorithms. An example consisting of foreign exchange rate data illustrates the proposed methodology.

3.
Value-at-Risk (VaR) is a simple but useful measure in risk management. When a volatility model is employed, the conditional VaR is of importance. As autoregressive conditional heteroscedastic (ARCH) and generalized ARCH (GARCH) models are widely used to model volatility, in this article we propose empirical likelihood methods to obtain interval estimates for the conditional VaR when the volatility follows an ARCH/GARCH model.
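A hedged sketch of the point estimate underlying such a construction: fit a GARCH(1,1) by Gaussian quasi-maximum likelihood and read off the one-step-ahead conditional VaR. The empirical-likelihood interval itself is more involved and is not reproduced here; all names and the toy data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def garch_filter(theta, r):
    """Conditional variances of a GARCH(1,1):
    s2_t = omega + alpha * r_{t-1}^2 + beta * s2_{t-1}."""
    omega, alpha, beta = theta
    s2 = np.empty_like(r)
    s2[0] = r.var()
    for t in range(1, len(r)):
        s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
    return s2

def neg_loglik(theta, r):
    s2 = garch_filter(theta, r)
    return 0.5 * np.sum(np.log(s2) + r ** 2 / s2)

def conditional_var(r, level=0.05):
    """Next-day conditional VaR point estimate, Gaussian innovations."""
    res = minimize(neg_loglik, x0=[r.var() * 0.05, 0.05, 0.90], args=(r,),
                   method="L-BFGS-B",
                   bounds=[(1e-12, None), (0.0, 1.0), (0.0, 1.0)])
    omega, alpha, beta = res.x
    s2 = garch_filter(res.x, r)
    s2_next = omega + alpha * r[-1] ** 2 + beta * s2[-1]
    return -norm.ppf(level) * np.sqrt(s2_next)

rng = np.random.default_rng(0)
r = 0.01 * rng.standard_normal(1000)      # toy return series
print("95% one-day conditional VaR:", conditional_var(r))
```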

4.
Self-normalization has been celebrated as an alternative approach to inference for time series because of its ability to avoid direct estimation of the nuisance asymptotic variance. However, when applied to quantities other than the mean, the conventional self-normalizer typically exhibits a certain degree of asymmetry, an undesirable feature especially for time-reversible processes. This paper considers a new self-normalizer for time series which (i) provides a time-symmetric generalization of the conventional self-normalizer, (ii) automatically reduces to the conventional self-normalizer in the mean case, where the latter is already time-symmetric, yielding a unified inference procedure, and (iii) can lead to narrower confidence intervals than the conventional self-normalizer. For the proposed time-symmetric self-normalizer, we establish the asymptotic theory of its induced inference procedure and examine its finite-sample performance through numerical experiments.
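For the mean case, the conventional self-normalizer that the paper generalizes admits a compact sketch. The 95% critical value used below is the commonly tabulated one, roughly 45.4 from Lobato (2001); treat the exact figure as an assumption to be checked against the tables.

```python
import numpy as np

def sn_confidence_interval(x, crit=45.4):
    """Self-normalized CI for the mean (conventional self-normalizer).

    T_n = n * (xbar - mu)^2 / D_n, with
    D_n = n^{-2} * sum_t (S_t - (t/n) * S_n)^2,
    which avoids estimating the long-run variance. crit is the 95%
    critical value of the limiting ratio
    B(1)^2 / int_0^1 (B(t) - t B(1))^2 dt (about 45.4, Lobato 2001).
    """
    n = len(x)
    xbar = x.mean()
    s = np.cumsum(x)
    t = np.arange(1, n + 1)
    d_n = np.sum((s - t / n * s[-1]) ** 2) / n ** 2
    half_width = np.sqrt(crit * d_n / n)
    return xbar - half_width, xbar + half_width

rng = np.random.default_rng(2)
print(sn_confidence_interval(rng.standard_normal(500)))
```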

5.
Least squares and maximum likelihood techniques have long been used in parameter estimation problems. However, those techniques provide only point estimates with unknown or approximate uncertainty information. Bayesian inference coupled with the Gibbs sampler is an approach to parameter estimation that exploits modern computing technology and yields estimation results complete with exact uncertainty information. The error-in-variables model (EVM) approach is investigated in this study: both dependent and independent variables contain measurement errors, and the true values and uncertainties of all measurements are estimated. This EVM set-up leads to unusually large dimensionality in the estimation problem, which makes parameter estimation very difficult with classical techniques. In this paper, an innovative way of performing parameter estimation is introduced to chemical engineers. The paper shows that the method is simple and efficient and that complete and accurate uncertainty information about the parameter estimates is readily available. Two real-world EVM examples are demonstrated: a large-scale linear model and an epidemiological model. The former is simple enough for most readers to grasp the new concepts without difficulty; the latter has very interesting features in that a Poisson distribution is assumed and a parameter with known distribution is retained while the other unknown parameters are estimated. The Gibbs sampler results are compared with those of least squares.
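A minimal sketch of the Gibbs mechanics on an ordinary linear model with conjugate conditionals; the paper's full EVM sampler would add a further block drawing the latent "true" regressor values, which is omitted here. Priors and names are illustrative assumptions.

```python
import numpy as np

def gibbs_linear(y, X, n_iter=5000, burn=1000, seed=0):
    """Gibbs sampler for y = X beta + eps, eps ~ N(0, sigma^2 I),
    with a flat prior on beta and an IG(a0, b0) prior on sigma^2.
    A full EVM sampler would add a block drawing the latent true
    regressor values given everything else."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    a0, b0 = 0.01, 0.01                      # vague inverse-gamma prior
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    sigma2 = np.var(y - X @ beta_hat)
    draws = []
    for it in range(n_iter):
        # beta | sigma2, y  ~  N(beta_hat, sigma2 * (X'X)^{-1})
        beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
        # sigma2 | beta, y  ~  IG(a0 + n/2, b0 + SSR/2)
        ssr = np.sum((y - X @ beta) ** 2)
        sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + ssr / 2))
        if it >= burn:
            draws.append(np.append(beta, sigma2))
    return np.array(draws)    # posterior draws carry full uncertainty

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.standard_normal(100)])
y = X @ np.array([1.0, 2.0]) + 0.5 * rng.standard_normal(100)
print("posterior means:", gibbs_linear(y, X).mean(axis=0))
```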

6.
Tsai and Chan (2003) introduced the continuous-time autoregressive fractionally integrated moving-average (CARFIMA) models, which are useful for studying long-memory data. We consider the estimation of CARFIMA models from discrete-time data by maximizing the Whittle likelihood. We show that the quasi-maximum likelihood estimator is asymptotically normal and efficient. Finite-sample properties of the quasi-maximum likelihood estimator and of the exact maximum likelihood estimator are compared by simulation. The simulations suggest that, in finite samples, the quasi-maximum likelihood estimator of the Hurst parameter is less biased but more variable than the exact maximum likelihood estimator. We illustrate the method with a real application.
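The Whittle objective itself is easy to sketch. Below, the ARFIMA(0, d, 0) spectral density is used as a simple long-memory stand-in for the CARFIMA density of the paper (the relation H = d + 1/2 links the memory and Hurst parameters); the unit innovation variance and the toy data are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_neg_loglik(d, x):
    """Whittle quasi-likelihood: sum over Fourier frequencies of
    log f(lambda_j; d) + I(lambda_j) / f(lambda_j; d), where f is the
    ARFIMA(0, d, 0) spectral density with unit innovation variance,
    a stand-in for the CARFIMA density used in the paper."""
    n = len(x)
    j = np.arange(1, (n - 1) // 2 + 1)
    lam = 2 * np.pi * j / n
    # periodogram at the Fourier frequencies
    I = np.abs(np.fft.fft(x - x.mean())[j]) ** 2 / (2 * np.pi * n)
    f = (2 * np.sin(lam / 2)) ** (-2 * d) / (2 * np.pi)
    return np.sum(np.log(f) + I / f)

# maximize over the memory parameter d in (-1/2, 1/2)
rng = np.random.default_rng(1)
x = rng.standard_normal(512)              # toy white noise, true d = 0
res = minimize_scalar(whittle_neg_loglik, bounds=(-0.49, 0.49),
                      args=(x,), method="bounded")
print("Whittle estimate of d:", res.x)
```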

7.
We develop a robust least squares estimator for autoregressions with possibly heavy-tailed errors. Robustness to heavy tails is ensured by negligibly trimming the squared error according to extreme values of the error and regressors. Tail-trimming ensures asymptotic normality and super-√n convergence, with a rate comparable to the highest achieved amongst M-estimators for stationary data. Moreover, tail-trimming ensures robustness to heavy tails in both small and large samples. By comparison, existing robust estimators are not as robust in small samples, have a slower rate of convergence when the variance is infinite, or are not asymptotically normal. We present a consistent estimator of the covariance matrix and treat classic inference without knowledge of the rate of convergence. A simulation study demonstrates the sharpness and approximate normality of the estimator, and we apply the estimator to financial returns data. Finally, tail-trimming can easily be extended beyond least squares estimation of a linear stationary AR model: we discuss extensions to quasi-maximum likelihood for GARCH, weighted least squares for a possibly non-stationary random coefficient autoregression, and empirical likelihood for robust confidence region estimation, in each case for models with possibly heavy-tailed errors.
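A hedged sketch of the trimming idea for an AR(1): a pilot OLS fit supplies residuals, observations with extreme residuals or regressors are dropped, and OLS is re-run. The fixed 2% trimming fraction is illustrative; in the paper the fraction is negligible and shrinks with the sample size.

```python
import numpy as np

def tail_trimmed_ar1(x, frac=0.02):
    """Tail-trimmed least squares for an AR(1), x_t = phi x_{t-1} + e_t.

    A pilot OLS gives residuals; observations whose residual or
    regressor lies in the extreme 'frac' tail are dropped before
    re-estimating. frac = 0.02 is an illustrative choice."""
    y, z = x[1:], x[:-1]
    phi0 = np.dot(z, y) / np.dot(z, z)            # pilot OLS
    e = y - phi0 * z
    keep = (np.abs(e) <= np.quantile(np.abs(e), 1 - frac)) & \
           (np.abs(z) <= np.quantile(np.abs(z), 1 - frac))
    return np.dot(z[keep], y[keep]) / np.dot(z[keep], z[keep])

# heavy-tailed errors: Student-t with 1.5 degrees of freedom
rng = np.random.default_rng(3)
e = rng.standard_t(1.5, size=2000)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.5 * x[t - 1] + e[t]
print("tail-trimmed AR(1) estimate:", tail_trimmed_ar1(x))
```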

8.
Based on the concept of a Lévy copula for describing the dependence structure of a multivariate Lévy process, we present a new estimation procedure. We consider a parametric model for the marginal Lévy processes as well as for the Lévy copula and estimate the parameters in two steps: we first estimate the parameters of the marginal processes and then, in a second step, estimate only the dependence structure parameter. For infinite Lévy measures, we truncate the small jumps and base our statistical analysis on the large jumps of the model. A prominent example is a bivariate stable Lévy process, which allows for analytic calculations and hence for a comparison of different methods. We prove asymptotic normality of the parameter estimates from the two-step procedure and, in particular, derive the Godambe information matrix, whose inverse is the covariance matrix of the normal limit law. A simulation study investigates the loss of efficiency caused by the two-step procedure and the truncation.

9.
The availability of high-frequency financial data has led to substantial improvements in our understanding of financial volatility. Most of the existing literature focuses on estimating the integrated volatility over a fixed period. This article proposes a non-parametric threshold kernel method to estimate the time-dependent spot volatility and jumps when the underlying price process is governed by a Brownian semimartingale with finite-activity jumps. The threshold kernel estimator combines threshold estimation for the integrated volatility with the kernel filtering approach to spot volatility used when the price process is driven only by diffusions without jumps. The proposed estimator is consistent and asymptotically normal and has the same rate of convergence as the estimator studied by Kristensen (2010) in a setting without jumps. A Monte Carlo simulation study shows that the proposed estimator performs well over a wide range of jump sizes and sampling frequencies. An empirical example illustrates potential applications of the proposed method.
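The estimator combines two familiar ingredients, which a short sketch makes explicit: kernel smoothing of squared increments around the target time, with increments above a threshold c·Δ^ϖ (ϖ < 1/2) discarded as jumps. The Gaussian kernel and the tuning constants below are illustrative choices, not the paper's.

```python
import numpy as np

def spot_vol(t_obs, x, tau, h, c=4.0, varpi=0.49):
    """Threshold kernel estimator of spot volatility at time tau.

    Squared increments are kernel-smoothed around tau; increments
    exceeding c * delta**varpi (varpi < 1/2) are treated as jumps and
    discarded. Gaussian kernel; c, varpi, h are tuning assumptions."""
    dx = np.diff(x)
    t_mid = t_obs[:-1]
    delta = np.diff(t_obs).mean()
    thr = c * delta ** varpi                      # jump threshold
    k = np.exp(-0.5 * ((t_mid - tau) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    keep = np.abs(dx) <= thr
    return np.sum(k[keep] * dx[keep] ** 2)

# toy path: Brownian motion on [0, 1] with sigma = 0.2, n = 23400 ticks
rng = np.random.default_rng(4)
n = 23400
t = np.linspace(0, 1, n + 1)
x = np.cumsum(np.r_[0, 0.2 * np.sqrt(1 / n) * rng.standard_normal(n)])
print("sigma^2(0.5) estimate:", spot_vol(t, x, tau=0.5, h=0.05))
```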

10.
Many empirical findings show that volatility in financial time series exhibits high persistence. Some researchers argue that such persistence is due to volatility shifts in the market, while others believe that it is a natural fluctuation explained by stationary long-range dependence models. These two approaches confuse many practitioners, and forecasts of future volatility differ dramatically depending on which model is used. In this article, we therefore consider a statistical testing procedure to distinguish volatility shifts in a generalized AR conditional heteroscedasticity (GARCH) model from long-range dependence. Our testing procedure is based on the residual-based cumulative sum (CUSUM) test, which is designed to correct the size distortion observed for GARCH models. We examine the validity of our method by deriving the asymptotic distribution of the test statistic. A Monte Carlo simulation study shows that the proposed method achieves good size while providing reasonable power against long-range dependence. The test is also observed to be robust to misspecified GARCH models.
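The CUSUM-of-squares core of such a test is easy to sketch once standardized residuals from a fitted GARCH model are in hand (the fitting step is omitted here). The 5% critical value of sup |Brownian bridge| is about 1.358; the iid-scale normalization below is a simplification of a long-run variance estimate.

```python
import numpy as np

def cusum_of_squares(eps):
    """Residual-based CUSUM statistic for a variance change.

    eps: standardized residuals from a fitted GARCH model. Under the
    no-change null the statistic converges to sup |Brownian bridge|,
    whose 5% critical value is about 1.358."""
    n = len(eps)
    u = eps ** 2
    s = np.cumsum(u)
    bridge = np.abs(s - np.arange(1, n + 1) / n * s[-1])
    tau = u.std(ddof=1)        # iid scale; a long-run variance
                               # estimate is safer in practice
    return bridge.max() / (np.sqrt(n) * tau)

rng = np.random.default_rng(5)
print("CUSUM statistic:", cusum_of_squares(rng.standard_normal(1000)))
```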

11.
The rescaled fourth-order cumulant of the unobserved innovations of a linear time series is an important parameter in statistical inference. This article deals with the problem of estimating this parameter. An existing non-parametric estimator is first discussed, and its asymptotic properties are derived. It is shown how the autocorrelation structure of the underlying process affects the behaviour of the estimator. Based on these findings, and on an important invariance property of the parameter of interest with respect to linear filtering, a pre-whitening-based non-parametric estimator of the same parameter is proposed. The estimator is obtained using the filtered time series only; that is, an inversion of the pre-whitening procedure is not required. The asymptotic properties of the new estimator are investigated, and its superiority is established for large classes of stochastic processes. It is shown that, for the particular estimation problem considered, pre-whitening can reduce both the variance and the bias of the estimator. The finite-sample performance of both estimators is investigated by means of simulations. The new estimator allows for a simple modification of the multiplicative frequency domain bootstrap which extends its considerable range of validity. Furthermore, the problem of testing hypotheses about the rescaled fourth-order cumulant of the unobserved innovations is considered, and in this context a simple test for Gaussianity is proposed. Some real-data applications are presented.
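A minimal sketch of the pre-whitening route: fit an AR(p) by Yule-Walker, keep only the residuals of the filter, and take their sample excess kurtosis as the rescaled fourth-order cumulant estimate. The AR order is an illustrative choice, and this simplified moment estimator stands in for the article's exact construction.

```python
import numpy as np
from scipy.linalg import toeplitz, solve

def prewhitened_kurtosis(x, p=5):
    """Rescaled fourth-order innovation cumulant via pre-whitening.

    Fit an AR(p) by Yule-Walker, keep the residuals (the filtered
    series only; no inversion of the filter is needed), and use their
    sample excess kurtosis m4 / m2^2 - 3. p = 5 is illustrative."""
    x = x - x.mean()
    n = len(x)
    # Yule-Walker: solve R a = r for the AR coefficients
    acov = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    a = solve(toeplitz(acov[:p]), acov[1:])
    # residuals of the fitted AR(p) filter
    e = x[p:] - sum(a[k] * x[p - 1 - k:n - 1 - k] for k in range(p))
    m2, m4 = np.mean(e ** 2), np.mean(e ** 4)
    return m4 / m2 ** 2 - 3.0

rng = np.random.default_rng(6)
print("excess kurtosis estimate:", prewhitened_kurtosis(rng.standard_normal(2000)))
```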

12.
The log-Gaussian Cox process is a flexible and popular stochastic process for modeling point patterns exhibiting spatial and space-time dependence. Model fitting requires approximation of stochastic integrals, which is implemented through discretization over the domain of interest. With fine-scale discretization, inference based on Markov chain Monte Carlo is computationally burdensome because of the cost of matrix decompositions and storage, such as the Cholesky, for the high-dimensional covariance matrices associated with the latent Gaussian variables. This article addresses these computational bottlenecks by combining two recent developments: (i) a data augmentation strategy proposed for space-time Gaussian Cox processes that is based on exact Bayesian inference and does not require fine-grid approximations of infinite-dimensional integrals, and (ii) a recently developed family of sparsity-inducing Gaussian processes, called nearest-neighbor Gaussian processes, that avoids expensive matrix computations. Our inference is delivered within the fully model-based Bayesian paradigm and does not sacrifice the richness of traditional log-Gaussian Cox processes. We apply our method to crime event data in San Francisco and investigate the recovery of the intensity surface.

13.
This article introduces a family of 'generalized long-memory time series models' in which observations have a specified conditional distribution given a latent Gaussian fractionally integrated autoregressive moving-average (ARFIMA) process. The observations may have discrete or continuous distributions, or a mixture of both. The family includes existing models such as ARFIMA models themselves, long-memory stochastic volatility models, long-memory censored Gaussian models, and others. Although the family of models is flexible, the latent long-memory process poses problems for analysis. We therefore introduce a Markov chain Monte Carlo sampling algorithm and develop a set of recursions that makes it feasible. This makes it possible, among other things, to carry out exact likelihood-based analysis of a wide range of non-Gaussian long-memory models without resorting to likelihood approximations. The procedure also yields predictive distributions that take model parameter uncertainty into account. The approach is demonstrated in two case studies.

14.
In this article, we introduce the general setting of a multivariate autoregressive time series model with stochastically time-varying coefficients and time-varying conditional variance of the error process. This allows modelling VAR dynamics for non-stationary time series and estimating the time-varying parameter processes by the well-known rolling regression estimation techniques. We establish consistency, convergence rates, and asymptotic normality for kernel estimators of the paths of the coefficient processes and provide pointwise valid standard errors. The method is applied to a popular seven-variable dataset to analyse evidence of time variation in empirical objects of interest for the DSGE (dynamic stochastic general equilibrium) literature.
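A sketch of the kernel-weighted rolling regression for a time-varying VAR(1), y_t = A(t/n) y_{t-1} + e_t: each point on the coefficient path solves a locally weighted least-squares problem. The Gaussian kernel and the bandwidth h are assumptions, and the paper's standard errors are not reproduced.

```python
import numpy as np

def tv_var1_coefficients(y, h):
    """Kernel-weighted rolling regression for a time-varying VAR(1).

    For each rescaled time u, observations are weighted by a Gaussian
    kernel in |t/n - u| and the weighted least-squares problem is
    solved. Returns the estimated coefficient path A(u)."""
    n, d = y.shape
    u_grid = np.arange(1, n) / n
    paths = np.empty((n - 1, d, d))
    for i, u in enumerate(u_grid):
        w = np.exp(-0.5 * ((np.arange(1, n) / n - u) / h) ** 2)
        X = y[:-1] * np.sqrt(w)[:, None]          # weighted regressors
        Z = y[1:] * np.sqrt(w)[:, None]           # weighted responses
        # A(u)' solves (X'X) A' = X'Z
        paths[i] = np.linalg.solve(X.T @ X, X.T @ Z).T
    return paths
```

The bandwidth h plays the role of the rolling-window length: it governs how many neighbouring observations effectively enter each local regression.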

15.
We approach the problem of non-parametric estimation for autoregressive Markov switching processes. In this context, the Nadaraya-Watson-type regression function estimator is interpreted as the solution of a local weighted least-squares problem, which does not admit a closed-form solution in the case of hidden Markov switching. We introduce a non-parametric recursive algorithm to approximate the estimator. Our algorithm restores the missing data by means of a Monte Carlo step and estimates the regression function via a Robbins-Monro step. We prove that non-parametric autoregressive models with Markov switching are identifiable when the hidden Markov process has a finite state space. Consistency of the estimator is proved using the strong α-mixing property of the model. Finally, we present simulations illustrating the performance of our non-parametric estimation procedure.

16.
This article describes a polynomial estimation technique based on the state-space model and develops an estimation method for the quadratic estimation problem by applying the multivariate recursive least squares (RLS) Wiener estimator to the quadratic estimation of a stochastic signal in linear discrete-time stochastic systems. The augmented signal vector includes the signal to be estimated and its quadratic quantity, and is modelled by an autoregressive model of appropriate order. A numerical simulation using a speech signal as a practical stochastic signal shows that estimation accuracy is considerably improved in comparison with the existing RLS Wiener estimators. The proposed method may be applied advantageously to the quadratic estimation of wide-sense stationary stochastic signals in general.

17.
A time-varying autoregression is considered with a similarity-based coefficient and possible drift. It is shown that the random-walk model has a natural interpretation as the leading term in a small-sigma expansion of a similarity model with an exponential similarity function as its AR coefficient. Consistency of the quasi-maximum likelihood estimator of the parameters of this model is established, the behaviour of the score and Hessian functions is analysed, and test statistics are suggested. A complete list is provided of the normalization rates required for the consistency proof and for the standardization of the score and Hessian functions. A large family of unit root models with stationary and explosive alternatives is characterized within the similarity class through the asymptotic negligibility of a certain quadratic form that appears in the score function. A variant of the stochastic unit root model within the class is studied, and a large-sample limit theory is provided, leading to a new nonlinear diffusion process limit that shows the form of the drift and conditional volatility induced by sustained stochastic departures from unity. The findings provide a composite case for time-varying coefficient dynamic modelling. Some simulations and a brief empirical application to data on international Exchange Traded Funds are included.

18.
We study inference and diagnostics for count time series regression models that include a feedback mechanism. In particular, we are interested in negative binomial processes for count time series. We study probabilistic properties and quasi-likelihood estimation for this class of processes and show that the resulting estimators are consistent and asymptotically normally distributed. These facts enable us to construct probability integral transformation plots for assessing the assumed distributional assumptions. The key observation in developing the theory is a mean-parameterized form of the negative binomial distribution. For transactions data, the negative binomial distribution is seen to offer a better fit than the Poisson distribution, an immediate consequence of the fact that transactions can be represented as a collection of individual activities corresponding to different trading strategies.
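A hedged sketch of the quasi-likelihood step, assuming an INGARCH-type feedback recursion for the conditional mean: the Poisson likelihood kernel is maximized for the mean parameters, and the negative binomial enters only through the variance Var(Y_t) = λ_t + λ_t²/r when assessing the distributional fit. Names and toy data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def neg_quasi_loglik(theta, y):
    """Poisson-based quasi-log-likelihood for a count feedback model
    with mean recursion lambda_t = d + a * lambda_{t-1} + b * y_{t-1}
    (an assumed INGARCH-type specification). Only the mean must be
    correctly specified for consistency of the estimators."""
    d, a, b = theta
    lam = np.empty(len(y))
    lam[0] = y.mean()
    for t in range(1, len(y)):
        lam[t] = d + a * lam[t - 1] + b * y[t - 1]
    return -np.sum(y * np.log(lam) - lam)         # Poisson kernel

# y: observed counts (e.g. numbers of transactions per interval)
rng = np.random.default_rng(7)
y = rng.poisson(5.0, size=500)                    # toy stand-in counts
res = minimize(neg_quasi_loglik, x0=[1.0, 0.3, 0.3], args=(y,),
               method="L-BFGS-B",
               bounds=[(1e-6, None), (0.0, 0.99), (0.0, 0.99)])
print("quasi-likelihood estimates (d, a, b):", res.x)
```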

19.
This article develops on-line inference for the multivariate local level model, with the focus placed on covariance estimation of the innovations. We assess the application of the inverse Wishart prior distribution in this context and find it too restrictive, since the serial correlation structures of the observation and state innovations are forced to be the same. We generalize the inverse Wishart distribution to allow for a more convenient correlation structure while still retaining approximate conjugacy. We prove some relevant results for the new distribution, and we develop approximate Bayesian inference that allows simultaneous forecasting of the time series data and estimation of the covariance of the model's innovations. We provide results on the steady state of the level of the time series, which are deployed to achieve computational savings. Using Monte Carlo experiments, we compare the proposed methodology with existing estimation procedures. An example with real data, consisting of production data from an industrial process, is given.

20.
In this article, change-point problems for long-memory stochastic volatility (LMSV) models are considered. A general testing problem that includes various alternative hypotheses is discussed. Under the hypothesis of stationarity, the limiting behaviour of CUSUM- and Wilcoxon-type test statistics is derived. In this context, a limit theorem for the two-parameter empirical process of LMSV time series is proved. In particular, it is shown that the asymptotic distribution of CUSUM test statistics may not be affected by long memory, unlike Wilcoxon test statistics, which are typically influenced by long-range dependence. To avoid the estimation of nuisance parameters in applications, the use of self-normalized test statistics is proposed. The theoretical results are accompanied by an analysis of Standard & Poor's 500 daily closing indices with respect to structural changes and by simulation studies characterizing the finite-sample behaviour of the considered testing procedures when testing for changes in mean and in variance.
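The raw CUSUM and Wilcoxon change-point processes are simple to compute, as the sketch below shows; the article's contribution lies in their limiting behaviour under LMSV and in the self-normalization that replaces the standardization omitted here.

```python
import numpy as np

def change_point_statistics(x):
    """Unnormalized CUSUM and Wilcoxon change-point statistics.

    CUSUM_k = sum_{i<=k} x_i - (k/n) sum_i x_i;
    W_k = sum_{i<=k} sum_{j>k} (1{x_i <= x_j} - 1/2).
    Only the raw processes are shown; the paper standardizes or
    self-normalizes them to obtain pivotal limits."""
    n = len(x)
    s = np.cumsum(x)
    cusum = s[:-1] - np.arange(1, n) / n * s[-1]
    wilcoxon = np.empty(n - 1)
    for k in range(1, n):
        # count pairs (i <= k < j) with x_i <= x_j, centred at 1/2
        wilcoxon[k - 1] = np.sum(x[:k, None] <= x[None, k:]) - k * (n - k) / 2
    return np.abs(cusum).max(), np.abs(wilcoxon).max()

rng = np.random.default_rng(8)
x = np.r_[rng.standard_normal(200), 1.0 + rng.standard_normal(200)]
print("max |CUSUM|, max |Wilcoxon|:", change_point_statistics(x))
```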
