Similar Documents
20 similar documents found.
1.
A Bayesian analysis of stochastic volatility (SV) models using the class of symmetric scale mixtures of normal (SMN) distributions is considered. In the face of non-normality, this provides an appealing robust alternative to the routine use of the normal distribution. Specific distributions examined include the normal, Student-t, slash and variance gamma distributions. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo (MCMC) algorithm is introduced for parameter estimation. Moreover, the mixing parameters obtained as a by-product of the scale mixture representation can be used to identify outliers. The methods developed are applied to daily returns on the S&P 500 index. Bayesian model selection criteria as well as out-of-sample forecasting results reveal that SV models based on heavy-tailed SMN distributions provide significant improvements in both model fit and prediction for the S&P 500 data over the usual normal model.
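For orientation (this is standard SV-SMN notation, not taken from the article itself), such models are typically written with a latent mixing variable $\lambda_t$ that scales the return error, for example

$$ y_t = e^{h_t/2}\,\lambda_t^{-1/2}\,\varepsilon_t, \qquad h_{t+1} = \mu + \phi\,(h_t - \mu) + \sigma_\eta\,\eta_t, \qquad \varepsilon_t,\ \eta_t \sim \mathcal{N}(0,1), $$

where the distribution of $\lambda_t$ selects the SMN member (e.g. $\lambda_t \sim \mathrm{Gamma}(\nu/2, \nu/2)$ yields Student-t returns), and unusually small posterior draws of $\lambda_t$ flag candidate outliers.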

2.
This paper studies a heavy-tailed stochastic volatility (SV) model with leverage effect, where a bivariate Student-t distribution is used to model the error innovations of the return and volatility equations. Choy et al. (2008) studied this model by expressing the bivariate Student-t distribution as a scale mixture of bivariate normal distributions. We propose an alternative formulation by first deriving a conditional Student-t distribution for the return and a marginal Student-t distribution for the log-volatility, and then expressing these two Student-t distributions as scale mixtures of normal (SMN) distributions. Our approach separates the sources of outliers, allowing outliers generated by the return process to be distinguished from those generated by the volatility process, and hence improves on the approach of Choy et al. (2008). In addition, it allows an efficient model implementation using the WinBUGS software. A simulation study is conducted to assess the performance of the proposed approach and to compare it with the approach of Choy et al. (2008). In the empirical study, daily exchange rate returns of the Australian dollar against various currencies and daily index returns of various international stock markets are analysed. Model comparison relies on the Deviance Information Criterion, and convergence is monitored with Geweke's diagnostic test.
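For context, the scale-mixture identity that such formulations exploit writes a univariate Student-t density as a gamma mixture of normals (a standard result, not specific to this paper):

$$ t_\nu(y \mid \mu, \sigma^2) = \int_0^{\infty} \mathcal{N}\!\left(y \,\middle|\, \mu, \frac{\sigma^2}{\lambda}\right) \mathrm{Gamma}\!\left(\lambda \,\middle|\, \frac{\nu}{2}, \frac{\nu}{2}\right) d\lambda. $$

Conditional on $\lambda$ the model is Gaussian, which is what makes Gibbs-type updating in WinBUGS convenient, and a small draw of $\lambda$ for a given observation signals an inflated scale, i.e. a candidate outlier in whichever equation it enters.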

3.
Stochastic volatility (SV) models usually assume that the distribution of asset returns conditional on the latent volatility is normal. This article analyzes SV models with a mixture-of-normals distribution in order to compare it with other heavy-tailed distributions such as the Student-t distribution and the generalized error distribution (GED). A Bayesian method based on Markov chain Monte Carlo (MCMC) techniques is used to estimate the parameters, and Bayes factors are calculated to compare the fit of the distributions. The method is illustrated by analyzing daily data on the Yen/Dollar exchange rate and the Tokyo stock price index (TOPIX). According to the Bayes factors, while the t distribution fits the TOPIX better than the normal, the GED and the normal mixture, the mixture-of-normals distribution gives a better fit to the Yen/Dollar exchange rate than the other models. The effects of the specification of the error distribution on Bayesian confidence intervals for future returns are also examined. A comparison of SV with GARCH models shows that there are cases in which the SV model with normal errors is less effective at capturing leptokurtosis than GARCH models with heavy-tailed distributions.

4.
This paper presents a heavy-tailed mixture model for describing time-varying conditional distributions in time series of price returns. Student-t component distributions are used to capture the heavy tails typically encountered in such financial data. We design a mixture MT(m)-GARCH(p, q) volatility model for returns and develop an EM algorithm for maximum likelihood estimation of its parameters, including the formulation of proper temporal derivatives for the volatility parameters. Experiments with a low-order MT(2)-GARCH(1, 1) show that it yields results with improved statistical characteristics and economic performance compared to linear and nonlinear heavy-tailed GARCH models, as well as normal mixture GARCH. We demonstrate that our model leads to reliable Value-at-Risk performance for short and long trading positions across different confidence levels.
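As a rough, deliberately simplified illustration of the kind of likelihood involved (a single shared GARCH(1,1) variance recursion feeding a two-component Student-t mixture; the actual MT(2)-GARCH(1,1) specification and its EM estimator are more elaborate), consider the following sketch. All parameter values in the usage example are placeholders.

```python
import numpy as np
from scipy.stats import t as student_t

def mt2_garch11_loglik(returns, w, nu, omega, alpha, beta):
    """Log-likelihood of a 2-component Student-t mixture whose components share
    a single GARCH(1,1) conditional variance (illustrative sketch only)."""
    n = len(returns)
    sigma2 = np.empty(n)
    sigma2[0] = np.var(returns)                      # simple initialisation
    for t_idx in range(1, n):
        sigma2[t_idx] = omega + alpha * returns[t_idx - 1] ** 2 + beta * sigma2[t_idx - 1]
    scale = np.sqrt(sigma2)
    # mixture density: w * t_{nu1} + (1 - w) * t_{nu2}, both scaled by sigma_t
    dens = (w * student_t.pdf(returns, df=nu[0], scale=scale)
            + (1.0 - w) * student_t.pdf(returns, df=nu[1], scale=scale))
    return np.sum(np.log(dens))

# toy usage on simulated returns
rng = np.random.default_rng(0)
r = rng.normal(scale=0.01, size=500)
print(mt2_garch11_loglik(r, w=0.7, nu=(5.0, 10.0), omega=1e-6, alpha=0.05, beta=0.9))
```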

5.
It is now a well-established fact that search algorithms can exhibit heavy-tailed behavior. However, the reasons behind this fact are not well understood. We provide a generative search tree model whose distribution of the number of nodes visited during search is formally heavy-tailed. Our model allows us to generate search trees with any degree of heavy-tailedness. We also show how the different regimes observed for the runtime distributions of backtrack search methods across different constrainedness regions of random CSP models can be captured by a mixture of the so-called stable distributions.

6.
This paper presents a class of heavy-tailed market microstructure models with scale mixtures of normal distributions (MM-SMN), which includes two specific sub-classes, namely the slash and the Student-t distributions. Under a Bayesian perspective, a Markov chain Monte Carlo (MCMC) method is constructed to estimate all the parameters and latent variables in the proposed MM-SMN models. Two evaluation criteria, namely the deviance information criterion (DIC) and a white-noise test on the standardised residuals, are used to compare the MM-SMN models with the classic normal market microstructure (MM-N) model and with stochastic volatility models with scale mixtures of normal distributions (SV-SMN). Empirical studies on daily stock return data show that the MM-SMN models can accommodate possible outliers in the observed returns through the latent mixing variable. The results also indicate that the heavy-tailed MM-SMN models fit better than the MM-N model, and that the market microstructure model with the slash distribution (MM-s) provides the best fit. Finally, both evaluation criteria indicate that the market microstructure models with the three different distributions are superior to the corresponding stochastic volatility models.

7.
The aim of this paper is to derive diagnostic procedures based on case deletion for symmetrical nonlinear regression models, complementing Galea et al. (2005), who developed local influence diagnostics under several perturbation schemes. This class of models includes all symmetric continuous error distributions, covering both light- and heavy-tailed distributions such as the Student-t, logistic-I and -II, power exponential, generalized Student-t, generalized logistic and contaminated normal, among others. These models can therefore be checked for robustness to outliers in the response variable, and diagnostic methods can be a useful tool for an appropriate choice. First, an iterative process for parameter estimation as well as some inferential results are presented. We also report a simulation study in which the characteristics of heavy-tailed models are evaluated in the presence of outliers. Then, we derive diagnostic measures such as the Cook distance, the W-K statistic, the one-step approach and the likelihood displacement, generalizing results obtained for normal nonlinear regression models, and present simulation studies illustrating the behavior of the proposed diagnostic measures. Finally, we consider two real data sets previously analyzed under normal nonlinear regression models. The diagnostic analysis indicates that a Student-t nonlinear regression model fits the data better than the normal nonlinear regression model, as well as other symmetrical nonlinear models, in the sense of robustness against extreme observations.
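As a generic illustration of case-deletion diagnostics (stated for an ordinary nonlinear least-squares fit, not for the symmetrical-error estimators developed in the paper), a Cook-type distance can be obtained by refitting the model with each case removed and measuring the shift in the estimates:

```python
import numpy as np
from scipy.optimize import curve_fit

def case_deletion_cook(model, x, y, p0):
    """Cook-type distances via leave-one-out refitting for a nonlinear
    least-squares model (generic sketch, normal-error metric)."""
    beta_full, _ = curve_fit(model, x, y, p0=p0)
    resid = y - model(x, *beta_full)
    p, n = len(beta_full), len(y)
    s2 = np.sum(resid ** 2) / (n - p)
    # finite-difference Jacobian of the mean function w.r.t. the parameters
    eps = 1e-6
    J = np.empty((n, p))
    for j in range(p):
        step = np.zeros(p)
        step[j] = eps
        J[:, j] = (model(x, *(beta_full + step)) - model(x, *(beta_full - step))) / (2 * eps)
    JtJ = J.T @ J
    D = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        beta_i, _ = curve_fit(model, x[keep], y[keep], p0=beta_full)
        d = beta_full - beta_i
        D[i] = d @ JtJ @ d / (p * s2)   # classical Cook-distance metric
    return D

# toy usage with an exponential mean function
def expo(x, a, b):
    return a * np.exp(b * x)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 30)
y = expo(x, 2.0, 1.5) + rng.normal(scale=0.1, size=x.size)
print(np.round(case_deletion_cook(expo, x, y, p0=[1.0, 1.0]), 3))
```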

8.
Monte Carlo simulations may involve skewed, heavy-tailed distributions. When the variances of those distributions exist, statistically valid confidence intervals can be obtained using the central limit theorem, provided that the simulation is run "long enough." If the variances do not exist, however, valid confidence intervals are difficult or impossible to obtain. The main result of this paper establishes that, upon replacing ordinary Monte Carlo sampling of such heavy-tailed distributions with ex post facto sampling, estimates having finite moments of all orders are ensured for the most common class of infinite-variance distributions. We conjecture that this phenomenon applies to all distributions (having finite means) when the ex post facto process is iterated.

9.
The purpose of this paper is to develop a Bayesian analysis for nonlinear regression models under scale mixtures of skew-normal distributions. This novel class of models provides a useful generalization of symmetrical nonlinear regression models, since the error distributions cover both skewed and heavy-tailed distributions such as the skew-t, skew-slash and skew-contaminated normal distributions. The main advantage of this class of distributions is that they have a convenient hierarchical representation that allows Markov chain Monte Carlo (MCMC) methods to be used to simulate samples from the joint posterior distribution. To examine the robustness of this flexible class against outlying and influential observations, we present a Bayesian case-deletion influence diagnostic based on the Kullback-Leibler divergence. Some discussion of model selection criteria is also given. The newly developed procedures are illustrated with two simulation studies and a real data set previously analyzed under normal and skew-normal nonlinear regression models.
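For reference, a common form of such a diagnostic (generic notation, under conditional independence of the observations; not copied from the paper) is the divergence between the full posterior and the case-deleted posterior,

$$ K\big(\pi, \pi_{(i)}\big) = \int \pi(\theta \mid y)\, \log\frac{\pi(\theta \mid y)}{\pi(\theta \mid y_{(i)})}\, d\theta = \log E_{\theta \mid y}\!\left[\frac{1}{f(y_i \mid \theta)}\right] + E_{\theta \mid y}\big[\log f(y_i \mid \theta)\big], $$

so both terms can be estimated directly by averaging over the MCMC draws, with large values of $K$ pointing to influential cases.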

10.
We introduce a scalable searching protocol for locating and retrieving content in random networks with heavy-tailed, and in particular power-law (PL), degree distributions. The proposed algorithm is capable of finding any content in the network with probability one in time O(log N), with total traffic that provably scales sub-linearly with the network size N. Unlike other proposed solutions, there is no need to assume that the network holds multiple copies of each content item; the protocol finds all contents reliably, even if every node in the network starts with a unique content. The scaling behavior of the size of the giant connected component of a random graph with heavy-tailed degree distributions under bond percolation is at the heart of our results. The percolation search algorithm can be directly applied to make unstructured peer-to-peer (P2P) networks, such as Gnutella, Limewire and other file-sharing systems (which naturally display heavy-tailed degree distributions and approximate scale-free network structures), scalable. For example, simulations of the protocol on the Limewire crawl number 5 network [Ripeanu et al., Mapping the Gnutella network: properties of large-scale peer-to-peer systems and implications for system design, IEEE Internet Comput. J. 6 (1) (2002)], consisting of over 65,000 links and 10,000 nodes, show that even for this snapshot network the traffic can be reduced by a factor of at least 100 while still achieving a hit rate greater than 90%.
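A toy sketch of the bond-percolation idea at the heart of the protocol (assuming the `networkx` library; this omits the content-caching phase and is not the authors' implementation): each query is forwarded over every edge independently with probability p, and it succeeds if the percolated component around the querying node contains a node that holds the content.

```python
import random
import networkx as nx

def percolation_query(G, source, targets, p, rng):
    """Forward a query over each edge independently with probability p
    (bond percolation) and report whether any target node is reached."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    H.add_edges_from(e for e in G.edges() if rng.random() < p)
    reached = nx.node_connected_component(H, source)
    return any(t in reached for t in targets)

# toy experiment on an (approximately) heavy-tailed random graph
G = nx.barabasi_albert_graph(n=2000, m=2, seed=1)
targets = set(random.Random(2).sample(list(G.nodes()), 20))   # nodes holding the content
hits = sum(percolation_query(G, s, targets, p=0.3, rng=random.Random(s))
           for s in range(200))
print("hit rate:", hits / 200)
```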

11.
This paper addresses the problem of adaptive detection of radar targets embedded in heterogeneous compound-Gaussian clutter environments. Based on Bayesian theory, a priori knowledge of the clutter is utilized to improve detection performance. The clutter texture is modeled by the inverse Gaussian distribution to describe the heavy-tailed clutter. Furthermore, because the clutter's heterogeneity results in insufficient secondary data, the inverse complex Wishart distribution is exploited to model the speckle covariance matrix. Based on these prior distributions of the clutter, a novel detector that does not use secondary data is derived via the generalized likelihood ratio test (GLRT). Monte Carlo experiments are performed to evaluate the detection performance of the proposed detector. Experimental results illustrate that the proposed detector outperforms its competitors in scenarios with limited secondary data.
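As background, compound-Gaussian clutter is commonly written as a positive texture scaling a Gaussian speckle vector (generic notation; the paper's exact priors and test statistic are not reproduced here):

$$ \mathbf{c} = \sqrt{\tau}\,\mathbf{g}, \qquad \mathbf{g} \sim \mathcal{CN}(\mathbf{0}, \mathbf{M}), \qquad \tau \sim \text{Inverse-Gaussian}, $$

with an inverse complex Wishart prior placed on the speckle covariance $\mathbf{M}$; placing priors on both $\tau$ and $\mathbf{M}$ is what allows a GLRT-based detector to be derived without resorting to secondary data.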

12.
In this paper, we study the efficiency of egoistic and altruistic strategies within the model of social dynamics determined by voting in a stochastic environment (the ViSE model) using two criteria: maximizing the average capital increment and minimizing the number of bankrupt participants. The proposals are generated stochastically; three families of distributions are considered: normal distributions, symmetrized Pareto distributions, and Student's t-distributions. It is found that the "pit of losses" paradox described earlier does not occur in the case of heavy-tailed distributions. The egoistic strategy protects agents from extinction in aggressive environments better than the altruistic ones; however, altruism is more efficient in more favorable environments. A comparison of altruistic strategies with each other shows that in aggressive environments everyone should be supported to minimize extinction, while under more favorable conditions it is more efficient to support the weakest participants. Studying the dynamics of participants' capitals, we identify situations where the two criteria contradict each other. At the next stage of the study, combined voting strategies and societies comprising participants with both selfish and altruistic strategies will be explored.
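A heavily simplified toy simulation in the spirit of this setup (the concrete voting rule, bankruptcy condition and environment parameters below are all assumptions made for illustration, not the ViSE model's actual specification):

```python
import numpy as np

def simulate_society(n_agents=25, n_steps=5000, strategy="egoistic",
                     df=3, mean_shift=-0.1, seed=0):
    """Toy voting-in-a-stochastic-environment simulation (illustrative only).
    Proposals are i.i.d. per-agent capital increments from a shifted Student-t
    environment; a proposal is enacted if a simple majority of surviving
    agents supports it; agents go bankrupt when their capital drops below zero."""
    rng = np.random.default_rng(seed)
    capital = np.ones(n_agents)
    alive = np.ones(n_agents, dtype=bool)
    for _ in range(n_steps):
        if not alive.any():
            break
        d = rng.standard_t(df, size=n_agents) + mean_shift   # heavy-tailed environment
        if strategy == "egoistic":
            votes = d > 0                                     # support only one's own gain
        else:                                                 # crude "altruistic" rule:
            votes = np.full(n_agents, d[alive].mean() > 0)    # support if society gains on average
        if votes[alive].sum() > 0.5 * alive.sum():
            capital[alive] += d[alive]
            alive &= capital > 0
    return capital.mean(), int((~alive).sum())

for s in ("egoistic", "altruistic"):
    mean_cap, bankrupt = simulate_society(strategy=s)
    print(f"{s}: mean capital {mean_cap:.2f}, bankrupt {bankrupt}")
```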

13.
Over the last decades, the α-stable distribution has proved to be a very efficient model for impulsive data. In this paper, we propose an extension of stable distributions, namely a mixture of α-stable distributions, to model multimodal, skewed and impulsive data. A fully Bayesian framework is presented for the estimation of the stable density parameters and the mixture parameters. In contrast to most previous work on mixture models, the model order is assumed unknown and is estimated using reversible jump Markov chain Monte Carlo. It is worth noting that the Gaussian mixture model is a special case of the presented model, which provides additional flexibility to model skewed and impulsive phenomena. The algorithm is tested using synthetic and real data, accurately estimating the α-stable parameters, the mixture coefficients and the number of components in the mixture.
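To give a feel for the data such a model targets, here is a small sketch that draws from a two-component α-stable mixture with `scipy.stats.levy_stable` (all parameter values are invented for illustration; this is not the reversible-jump estimator itself):

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
n = 1000
# component assignments with mixture weights (0.6, 0.4)
z = rng.random(n) < 0.6
# component 1: heavy-tailed and right-skewed; component 2: closer to Gaussian
x = np.where(z,
             levy_stable.rvs(alpha=1.5, beta=0.8, loc=-2.0, scale=1.0, size=n, random_state=1),
             levy_stable.rvs(alpha=1.9, beta=0.0, loc=3.0, scale=0.5, size=n, random_state=2))
print("median and extremes:", np.median(x), x.min(), x.max())
```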

14.
The aim of this paper is to derive local influence curvatures under various perturbation schemes for elliptical linear models with longitudinal structure. The elliptical class provides a useful generalization of the normal model, since it covers both light- and heavy-tailed error distributions, such as the Student-t, power exponential and contaminated normal, among others. It is well known that elliptical models with longer-than-normal tails may produce parameter estimates that are robust against outlying observations. However, little has been investigated about the robustness of the parameter estimates against perturbation schemes. We use appropriate derivative operators to express the normal curvatures in tractable forms for any correlation structure. Estimation procedures for the location and variance-covariance parameters are also presented. A data set previously analyzed under a normal linear mixed model is reanalyzed under elliptical models. Local influence graphics are used to select models that are less sensitive to some perturbation schemes.
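For reference, the normal curvature that underlies this machinery (Cook's 1986 local influence approach, stated generically rather than for the paper's specific schemes) is

$$ C_{\mathbf l} = 2\left|\,\mathbf l^{\top}\boldsymbol{\Delta}^{\top}\big(-\ddot{\mathbf L}\big)^{-1}\boldsymbol{\Delta}\,\mathbf l\,\right|, \qquad \|\mathbf l\| = 1, $$

where $\ddot{\mathbf L}$ is the observed information matrix of the postulated model and $\boldsymbol{\Delta}$ collects the derivatives of the perturbed log-likelihood with respect to the perturbation vector, evaluated at the null perturbation; the direction $\mathbf l_{\max}$ attaining the largest curvature indicates the most influential observations.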

15.
Corporate credit rating systems have been an integral part of expert decision making in financial institutions for the last four decades. They are embedded in the pricing function that determines the interest rate of a loan contract, and they play a crucial role in the credit approval process. However, the currently employed intelligent systems are based on assumptions that completely ignore two key characteristics of financial data, namely their heavy-tailed actual distributions and their time-series nature. These unrealistic assumptions undermine the performance of the resulting corporate credit rating systems used to inform expert decisions. To address these shortcomings, in this work we propose a novel corporate credit rating system based on Student's-t hidden Markov models (SHMMs), a well-established method for modeling heavy-tailed time-series data. Under our approach, we use a properly selected set of financial ratios to perform credit scoring, which we model via SHMMs. We evaluate our method using a dataset pertaining to Greek corporations and SMEs; this dataset includes five years of financial data and delinquency behavior information. We perform extensive comparisons of the credit risk assessments obtained from our method with other models commonly used by financial institutions. As we show, our proposed system yields significantly more reliable predictions, offering a valuable new intelligent tool to assist bank experts in their decision making.
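A compact sketch of evaluating an HMM likelihood under Student-t emissions via the scaled forward recursion (the transition matrix, emission parameters and single observed ratio below are placeholders; the authors' SHMM training pipeline is not reproduced here):

```python
import numpy as np
from scipy.stats import t as student_t

def shmm_loglik(obs, pi0, A, df, loc, scale):
    """Scaled forward algorithm for an HMM with Student-t emissions
    (illustrative sketch; parameters are assumed known)."""
    K = len(pi0)
    # emission likelihoods: shape (T, K)
    B = np.column_stack([student_t.pdf(obs, df=df[k], loc=loc[k], scale=scale[k])
                         for k in range(K)])
    alpha = pi0 * B[0]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for t_idx in range(1, len(obs)):
        alpha = (alpha @ A) * B[t_idx]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# toy usage: two latent credit-quality regimes observed through one ratio
obs = np.random.default_rng(0).standard_t(df=4, size=200)
print(shmm_loglik(obs, pi0=np.array([0.5, 0.5]),
                  A=np.array([[0.95, 0.05], [0.10, 0.90]]),
                  df=[4, 6], loc=[0.0, 0.5], scale=[1.0, 2.0]))
```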

16.
The DNA microarray has been recognized as an important tool for studying the expression of thousands of genes simultaneously. These experiments allow us to compare two different samples of cDNA obtained under different conditions. A novel method for the analysis of replicated microarray experiments, based on modelling the gene expression distribution as a mixture of α-stable distributions, is presented. Some features of the distribution of gene expression, such as Pareto tails and the fact that the variance of any given array increases concomitantly with an increase in the number of genes studied, suggest the possibility of modelling the gene expression distribution on the basis of the α-stable density. The proposed methodology uses well-known properties of the α-stable distribution, such as the scale mixture of normals representation. A Bayesian log-posterior odds is calculated, which allows us to decide whether a gene is differentially expressed or not. The proposed methodology is illustrated using simulated and experimental data, and the results are compared with other existing statistical approaches. The proposed heavy-tail model outperforms other distributions and is easily applicable to microarray gene data, especially if the dataset contains outliers or exhibits high variance between replicates.

17.
A variety of methods for modelling overdispersed count data are compared. The methods are classified into three main categories. The first category comprises ad hoc methods (i.e. pseudo-likelihood, (extended) quasi-likelihood, double exponential family distributions). The second category comprises discretized continuous distributions, and the third comprises observational-level random-effects models (i.e. mixture models, comprising explicit and non-explicit continuous mixture models and finite mixture models). The main focus of the paper is a family of mixed Poisson distributions defined so that its mean μ is an explicit parameter of the distribution. This allows easier interpretation when μ is modelled using explanatory variables and provides a more orthogonal parameterization that eases model fitting. Specific three-parameter distributions considered are the Sichel and Delaporte distributions. A new four-parameter distribution, the Poisson-shifted generalized inverse Gaussian distribution, is introduced; it includes the Sichel and Delaporte distributions as a special case and a limiting case, respectively. A general formula for the derivative of the likelihood with respect to μ, applicable to the whole family of mixed Poisson distributions considered, is given. Within the framework introduced here, all parameters of the distributions are modelled as parametric and/or nonparametric (smooth) functions of explanatory variables. This provides a very flexible way of modelling count data. Maximum (penalized) likelihood estimation is used to fit the (non)parametric models.
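For concreteness, a mean-parameterised mixed Poisson family of the kind described here can be written generically (this is standard notation, not necessarily the paper's exact parameterisation) as

$$ P(Y = y \mid \mu) = \int_0^{\infty} \frac{(\mu\gamma)^{y} e^{-\mu\gamma}}{y!}\, f(\gamma)\, d\gamma, \qquad E(\gamma) = 1, $$

so that $E(Y) = \mu$ for any mixing density $f$ and $\mu$ can be linked directly to explanatory variables; taking $f$ to be a generalized inverse Gaussian density, for instance, yields the Sichel distribution.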

18.
The heavy-tailed phenomenon that characterises the runtime distributions of backtrack search procedures has received considerable attention over the past few years. Some have conjectured that heavy-tailed behaviour is largely due to the characteristics of the algorithm used; others have conjectured that problem structure is a significant contributor. In this paper we explore the former hypothesis, namely we study how variable and value ordering heuristics affect the heavy-tailedness of runtime distributions of backtrack search procedures. We demonstrate that heavy-tailed behaviour can be eliminated from particular classes of random problems by carefully selecting the search heuristics, even when using chronological backtrack search. We also show that combinations of good search heuristics can eliminate heavy tails from quasigroups with holes of order 10 and 20, and give some insights into why this is the case. These results motivate a more detailed analysis of the effects that variable and value orderings can have on heavy-tailedness. We show how combinations of variable and value ordering heuristics can result in a runtime distribution being inherently heavy-tailed. Specifically, we show that even if we were to use an oracle to refute insoluble subtrees optimally, for some combinations of heuristics we would still observe heavy-tailed behaviour. Finally, we study the distributions of refutation sizes found using different combinations of heuristics and gain some further insights into what characteristics tend to give rise to heavy-tailed behaviour.

19.
This article is devoted to studying the variance change-point problem in Student-t linear regression models. By exploiting the equivalence between the Student-t distribution and an appropriate scale mixture of normal distributions, a Bayesian approach combined with Gibbs sampling is developed to detect single and multiple change-points. Simulation studies are performed to illustrate the detection process and investigate the effectiveness of the developed approach. Finally, for illustration, the daily closing data of the U.S. Dow Jones index are analyzed and three variance change-points are detected.

20.
It is common in epidemiology and other fields that the data to be analyzed are collected with error-prone observations and that the variances of the measurement errors change across observations. Heteroscedastic measurement error (HME) models have been developed for such data. This paper extends the structural HME model to situations in which the observations jointly follow a scale mixture of normal (SMN) distributions. We develop EM algorithms to compute the maximum likelihood estimates for the model with and without equation error, respectively, and derive closed forms for the asymptotic variances. We also conduct simulations to verify the effectiveness of the EM estimates and confirm their robust behavior under heavy-tailed SMN distributions. A practical application is reported for data from the WHO MONICA Project on cardiovascular disease.
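To fix notation, a structural HME model of the kind extended here can be written generically (the paper's exact SMN specification is not reproduced) as

$$ Y_i = \alpha + \beta x_i + q_i + e_i, \qquad X_i = x_i + u_i, \qquad i = 1, \dots, n, $$

where $x_i$ is the latent true covariate, $q_i$ is the equation error (set to zero in the "without equation error" version), and the measurement errors $e_i$ and $u_i$ have known, observation-specific variances; modelling the random components jointly with an SMN distribution and conditioning on the latent mixing variable reduces the E-step to normal-theory calculations, which is what keeps the EM algorithm tractable.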
