Similar documents
20 similar documents found.
1.
This paper concerns the application of copula functions to VaR valuation. The copula function is used to model the dependence structure of multivariate assets. After introducing the traditional Monte Carlo simulation method and the pure copula method, we present a new algorithm based on mixture copula functions and the dependence measure Spearman's rho. This new method is used to simulate daily returns of two stock market indices in China, the Shanghai Stock Composite Index and the Shenzhen Stock Composite Index, and then to calculate six risk measures empirically, including VaR and conditional VaR. The results are compared with those derived from the traditional Monte Carlo method and the pure copula method. The comparison shows that the dependence structure between asset returns plays a more important role in valuing risk measures than the form of the marginal distributions.
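The simulation step can be sketched as follows. This is an illustrative example only, not the authors' implementation: the mixture here combines a Clayton and a Gaussian copula, and the mixture weight, copula parameters and Student-t marginals are assumed values rather than quantities calibrated (e.g. via Spearman's rho) to the Shanghai and Shenzhen index data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000
w_clayton, theta, rho = 0.4, 2.0, 0.6          # assumed mixture weight and copula parameters

def clayton_sample(n, theta, rng):
    """Marshall-Olkin sampler for the Clayton copula."""
    v = rng.gamma(1.0 / theta, 1.0, n)                       # shared frailty
    e = rng.exponential(size=(n, 2))
    return (1.0 + e / v[:, None]) ** (-1.0 / theta)

def gaussian_sample(n, rho, rng):
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    return stats.norm.cdf(z)

pick = rng.random(n) < w_clayton                              # which mixture component to draw from
u = np.where(pick[:, None], clayton_sample(n, theta, rng), gaussian_sample(n, rho, rng))

ret = stats.t.ppf(u, df=5) * 0.01                             # hypothetical t marginals, daily scale
loss = -(0.5 * ret[:, 0] + 0.5 * ret[:, 1])                   # equally weighted two-index portfolio

alpha = 0.99
var = np.quantile(loss, alpha)                                # Value at Risk
cvar = loss[loss >= var].mean()                               # conditional VaR (expected shortfall)
print(f"VaR(99%) = {var:.4%}, CVaR(99%) = {cvar:.4%}")
```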

2.
The effect of misspecification in both the marginals and the copula on the estimation of Value at Risk for multivariate portfolios is investigated. It is first shown that, when there is skewness in the data and symmetric marginals are used, the estimated elliptical (normal or t) copula correlations are negatively biased, with biases reaching as much as 70% of the true values. Moreover, the bias almost doubles when negative correlations are considered rather than positive ones. For the t copula degrees-of-freedom parameter, the use of wrong marginals instead delivers large positive biases. If the dependence structure is represented by a copula which is not elliptical, e.g. the Clayton copula, the effects of marginal misspecification on the copula parameter estimation can be rather different, depending on the sign of the marginal skewness. Extensive Monte Carlo studies then show that misspecifications in the marginal volatility equation more than offset the biases in the copula parameters when VaR forecasting is of concern, small samples are considered and the data are leptokurtic. The biases in the volatility parameters are much smaller, whereas those in the copula parameters remain almost unchanged or even increase as the sample size increases. In this case, copula misspecifications do play a role in VaR estimation. However, these effects depend heavily on the sign of the dependence.

3.
We examine the dependence structure of electricity spot prices across regional markets in Australia. One of the major objectives in establishing a national electricity market was to provide a nationally integrated and efficient electricity market, limiting the market power of generators in the separate regional markets. Our analysis is based on a GARCH approach to model the marginal price series in the considered regions, combined with copulae to capture the dependence structure between the marginals. We apply different copula models, including Archimedean, elliptical and copula mixture models. We find a positive dependence structure between the prices for all considered markets, while the strongest dependence is exhibited between markets that are connected via interconnector transmission lines. Regarding the nature of dependence, the Student-t copula provides a good fit to the data, while the overall best results are obtained using copula mixture models, owing to their ability to also capture asymmetric dependence in the tails of the distribution. Interestingly, our results also suggest that for the four major markets, NSW, QLD, SA and VIC, the degree of dependence decreased from 2008 towards the end of the sample period in 2010. Examining the Value-at-Risk of stylized portfolios constructed from electricity spot contracts in different markets, we find that the Student-t and mixture copula models outperform the Gaussian copula in a backtesting study. Our results are important for risk management and hedging decisions of market participants, in particular those operating in several regional markets simultaneously.
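A compressed sketch of the two-stage GARCH-copula pipeline follows. It is not the authors' code: the `arch` package is assumed to be installed, the price paths are synthetic placeholders standing in for regional spot prices, and the dependence is summarised only by the implied elliptical-copula correlation obtained from Kendall's tau (the relation rho = sin(pi*tau/2) holds for both the Gaussian and the Student-t copula).

```python
import numpy as np
from arch import arch_model
from scipy import stats

rng = np.random.default_rng(1)
p_a = np.exp(np.cumsum(0.01 * rng.standard_normal(1000)))    # placeholder price series A
p_b = np.exp(np.cumsum(0.01 * rng.standard_normal(1000)))    # placeholder price series B

def filtered_innovations(prices):
    """Stage 1: fit a GARCH(1,1) marginal and return standardised residuals."""
    r = 100 * np.diff(np.log(prices))
    res = arch_model(r, vol="Garch", p=1, q=1, dist="t").fit(disp="off")
    return res.resid / res.conditional_volatility

z_a, z_b = filtered_innovations(p_a), filtered_innovations(p_b)

# Stage 2: rank-based probability integral transform and dependence estimation
u_a = stats.rankdata(z_a) / (len(z_a) + 1)
u_b = stats.rankdata(z_b) / (len(z_b) + 1)
tau, _ = stats.kendalltau(u_a, u_b)
rho = np.sin(np.pi * tau / 2)                                # implied Gaussian/t copula correlation
print(f"Kendall tau = {tau:.3f}, implied copula correlation = {rho:.3f}")
```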

4.
In reliability-based design optimization (RBDO), input uncertainty models such as marginal and joint cumulative distribution functions (CDFs) need to be used. However, only limited data exist in industrial applications. Thus, identification of the input uncertainty model is challenging, especially when input variables are correlated. Since input random variables, such as fatigue material properties, are correlated in many industrial problems, the joint CDF of the correlated input variables needs to be correctly identified from the given data. In this paper, a Bayesian method is proposed to identify the marginal and joint CDFs from given data, where a copula, which only requires marginal CDFs and correlation parameters, is used to model the joint CDF of the input variables. Using simulated data sets, the performance of the Bayesian method is tested for different numbers of samples and is compared with the goodness-of-fit (GOF) test. Two examples are used to demonstrate how the Bayesian method is used to identify the correct marginal CDFs and copula.
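A stripped-down illustration of copula identification from data is given below. The paper's full Bayesian treatment is replaced here by BIC-based approximate posterior weights over just two candidate families (Gaussian and Clayton), and the "material property" data are simulated from a Clayton copula; everything in the sketch is an illustrative assumption.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
n, theta_true = 500, 2.0
v = rng.gamma(1.0 / theta_true, 1.0, n)                      # Clayton frailty variable
raw = (1 + rng.exponential(size=(n, 2)) / v[:, None]) ** (-1.0 / theta_true)

# pseudo-observations: rank-based estimates of the marginal CDF values
u = np.column_stack([stats.rankdata(raw[:, j]) / (n + 1) for j in range(2)])

def ll_gauss(r):
    x, y = stats.norm.ppf(u[:, 0]), stats.norm.ppf(u[:, 1])
    return np.sum(-0.5 * np.log(1 - r**2)
                  + (2 * r * x * y - r**2 * (x**2 + y**2)) / (2 * (1 - r**2)))

def ll_clayton(th):
    s = u[:, 0] ** -th + u[:, 1] ** -th - 1
    return np.sum(np.log(1 + th) - (th + 1) * np.log(u).sum(axis=1)
                  - (2 * th + 1) / th * np.log(s))

score = {}
for name, ll, bnd in [("Gaussian", ll_gauss, (-0.99, 0.99)),
                      ("Clayton", ll_clayton, (0.05, 20.0))]:
    fit = optimize.minimize_scalar(lambda p: -ll(p), bounds=bnd, method="bounded")
    score[name] = -fit.fun - 0.5 * np.log(n)                 # max log-likelihood minus BIC penalty (1 parameter)

w = np.exp(np.array(list(score.values())) - max(score.values()))
for name, prob in zip(score, w / w.sum()):
    print(f"{name} copula: approximate posterior probability {prob:.2f}")
```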

5.
A new semiparametric dynamic copula model is proposed in which the marginals are specified as parametric GARCH-type processes and the dependence parameter of the copula is allowed to change over time in a nonparametric way. A straightforward two-stage estimation method is given by local maximum likelihood for the dependence parameter, conditional on consistent first-stage estimates of the marginals. First, the properties of the estimator are characterized in terms of bias and variance, and the bandwidth selection problem is discussed. The proposed estimator attains the semiparametric efficiency bound, and its superiority is demonstrated through simulations. Finally, the wide applicability of the model to financial time series is illustrated, and it is compared with traditional models based on conditional correlations.
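The local-likelihood step can be sketched compactly. In the sketch below the copula is taken to be Gaussian, the GARCH first stage is skipped (the series are generated directly as normal scores with a slowly varying correlation), and the Gaussian kernel bandwidth is a fixed illustrative choice rather than one selected by the procedures discussed in the paper.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
T, h = 1000, 0.1                                             # sample size, bandwidth as a fraction of T
rho_true = 0.3 + 0.4 * np.sin(2 * np.pi * np.arange(T) / T)  # slowly varying true dependence
x = rng.normal(size=T)
y = rho_true * x + np.sqrt(1 - rho_true**2) * rng.normal(size=T)

def local_rho(t0):
    """Maximise the kernel-weighted Gaussian copula log-likelihood at time t0."""
    w = stats.norm.pdf((np.arange(T) - t0) / (h * T))        # kernel weights in time
    def nll(r):
        return -np.sum(w * (-0.5 * np.log(1 - r**2)
                            + (2 * r * x * y - r**2 * (x**2 + y**2)) / (2 * (1 - r**2))))
    return optimize.minimize_scalar(nll, bounds=(-0.99, 0.99), method="bounded").x

for t0 in range(0, T, 200):
    print(f"t = {t0:4d}: local rho = {local_rho(t0):+.2f}   (true {rho_true[t0]:+.2f})")
```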

6.

This paper develops a new time-varying mixture copula, in which the dynamic weights of four distinct copulas are determined by a two-stratum process, to investigate the magnitude of tail dependence in the four quadrants separately. In the two-stratum process, the weight of each copula is determined first by the relative importance of the positive and negative dependence structures, and then by its own past values and adjustment processes. The weighting mechanism is time-varying in each stratum. This new specification is applied to analyze the asymmetric tail dependencies between the stock and exchange rate markets. The empirical results show four interesting findings. First, quasi-maximum likelihood estimation (QMLE) has a better fitting ability than the inference function for margins, and the relative efficiency of the QMLE is irrespective of the marginal specifications. Second, the goodness-of-fit tests of the new time-varying mixture copula are crucially affected by the marginal specifications. Third, the estimation method affects the mixture weights. Four distinct tail dependencies are observed, revealing the importance of considering all four tails concurrently rather than only some of them. Fourth, the asymmetric positive and negative dependencies are significant. Each country shows a similar pattern of asymmetric negative dependence, but a different pattern of asymmetric positive dependence. These empirical findings have important implications for portfolio allocation.


7.
Modeling the dependence of credit ratings is an important issue in portfolio credit risk analysis. Multivariate Markov chain models are a feasible mathematical tool for modeling the dependence of credit ratings, and here we develop a flexible multivariate Markov chain model for this purpose. The proposed model provides a parsimonious way to capture both the cross-sectional and the temporal associations among the ratings of individual entities. The number of model parameters is of order O(sm² + s²m), where m is the number of rating categories and s is the number of entities in a credit portfolio. The proposed model is also easy to implement. The estimation method is formulated as a set of s linear programming problems, and the estimation algorithm can be implemented easily in a Microsoft Excel worksheet; see Ching et al., Int J Math Educ Sci Eng 35:921–932 (2004). We illustrate the practical implementation of the proposed model using real ratings data. We evaluate risk measures, such as Value at Risk and Expected Shortfall, for a credit portfolio using the proposed model and compare them with those arising from Ching et al., IMR Preprint Series (2007), and Siu et al., Quant Finance 5:543–556 (2005).
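The linear-programming flavour of the estimation can be illustrated with a toy example. The formulation below (choose the mixing weights for one entity so that the weighted one-step predictions reproduce that entity's stationary rating distribution, in the max-norm) is one common way such problems are set up in the multivariate Markov chain literature and is not necessarily the authors' exact objective; the transition matrices and stationary distributions are random placeholders.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
m, s = 3, 2                              # m rating categories, s entities in the portfolio

# placeholder column-stochastic transition matrices P[j, k] and stationary distributions x[k]
P = rng.random((s, s, m, m))
P /= P.sum(axis=2, keepdims=True)        # each column of P[j, k] sums to one
x = rng.random((s, m))
x /= x.sum(axis=1, keepdims=True)

j = 0                                    # estimate mixing weights lambda_{j,1..s} for entity j
Q = np.column_stack([P[j, k] @ x[k] for k in range(s)])      # m x s matrix of one-step predictions

# variables z = (lambda_1, ..., lambda_s, b): minimise b subject to
#   |Q @ lambda - x[j]| <= b componentwise,  sum(lambda) = 1,  lambda >= 0
c = np.r_[np.zeros(s), 1.0]
A_ub = np.block([[ Q, -np.ones((m, 1))],
                 [-Q, -np.ones((m, 1))]])
b_ub = np.r_[x[j], -x[j]]
A_eq = np.r_[np.ones(s), 0.0].reshape(1, -1)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, 1)] * s + [(0, None)])
print("weights:", np.round(res.x[:s], 3), " max deviation:", round(res.x[-1], 4))
```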

8.
Penalized B-splines combined with the composite link model are used to estimate a bivariate density from a histogram with wide bins. The goals are multiple: visualization of the dependence between the two variates, but also estimation of derived quantities such as Kendall's tau, conditional moments and quantiles. Two strategies are proposed: the first is semiparametric, with flexible margins modeled using B-splines and a parametric copula for the dependence structure; the second is nonparametric and is based on Kronecker products of the marginal B-spline bases. Frequentist and Bayesian estimation procedures are described. A large simulation study quantifies the performance of the two methods under different dependence structures and for varying strengths of dependence, sample sizes and amounts of grouping. It suggests that Schwarz's BIC is a good tool for classifying the competing models. The density estimates are used to evaluate conditional quantiles in two applications in the social and medical sciences.
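Once a density (or copula) estimate is available on a grid, derived quantities such as Kendall's tau reduce to numerical integration. The minimal sketch below uses a known copula density (the FGM copula, for which tau = 2*theta/9 exactly) in place of a penalized B-spline estimate, purely to show the computation tau = 4 E[C(U,V)] - 1.

```python
import numpy as np

k = 1000                                         # grid resolution
d = 1.0 / k                                      # cell width
u = (np.arange(k) + 0.5) / k                     # cell midpoints on (0, 1)
U, V = np.meshgrid(u, u, indexing="ij")

theta = 0.5
c = 1 + theta * (1 - 2 * U) * (1 - 2 * V)        # FGM copula density (stand-in for an estimate)

C = np.cumsum(np.cumsum(c, axis=0), axis=1) * d * d          # copula CDF via cumulative sums
tau_hat = 4 * np.sum(C * c) * d * d - 1                      # Riemann-sum approximation (small grid bias)
print(f"numerical tau = {tau_hat:.3f}, exact = {2 * theta / 9:.3f}")
```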

9.
This paper develops value at risk (VaR) measures for portfolios of correlated financial assets, without assuming normal returns. The approach can cope with any distribution for marginal returns, fat-tailed ones included. We provide VaR bounds which hold independently of the joint distribution of returns and their dependence structure. The lower bound can be interpreted as a worst-case scenario VaR. We show that it not only requires little information, but is also easy to compute. In this sense, we suggest it as a practical device for portfolio managers. An application to portfolios of stock-market indices is provided. Comparisons with the VaR values under the normality assumption on returns are discussed.

10.
Multivariate time series are ubiquitous in a broad array of applications and often include both categorical and continuous series. Further, in many contexts the continuous variable behaves nonlinearly conditional on a categorical time series. To accommodate the complexity of this structure, we propose a multi-regime smooth transition model where the transition variable is derived from the categorical time series and the degree of smoothness in transitioning between regimes is estimated from the data. The joint model for the continuous and ordinal time series is developed using a Bayesian hierarchical approach and thus naturally quantifies different sources of uncertainty. Additionally, we allow a general number of regimes in the smooth transition model and, for estimation, propose an efficient Markov chain Monte Carlo algorithm that blocks the parameters. Moreover, the model can be used effectively to draw inference on the behavior within and between regimes, as well as on the regime probabilities. To demonstrate the frequentist properties of the proposed Bayesian estimators, we present the results of a comprehensive simulation study. Finally, we illustrate the utility of the proposed model through the analysis of two macroeconomic time series.
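The basic ingredient, a smooth transition between regimes governed by a transition variable, can be illustrated with a much simpler two-regime model than the Bayesian hierarchical one proposed in the paper. In the sketch below the transition variable and regressor are simulated, the transition is logistic, and the fit is by nonlinear least squares rather than MCMC; everything is an illustrative stand-in.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
T = 800
s = rng.normal(size=T)                               # transition variable (proxy for the categorical driver)
x = rng.normal(size=T)                               # regressor

def smooth_transition(X, b1, b2, gamma, c):
    x, s = X
    G = 1.0 / (1.0 + np.exp(-gamma * (s - c)))       # logistic transition; gamma controls smoothness
    return (1 - G) * b1 * x + G * b2 * x             # regime-1 slope blends into regime-2 slope

y = smooth_transition((x, s), 0.5, 2.0, 4.0, 0.0) + 0.3 * rng.normal(size=T)

est, _ = curve_fit(smooth_transition, (x, s), y, p0=[0.0, 1.0, 1.0, 0.0])
print("b1, b2, gamma, c =", np.round(est, 2))        # should be close to 0.5, 2.0, 4.0, 0.0
```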

11.
We propose an approach for dependence tree structure learning via copulas. A nonparametric algorithm for copula estimation is presented. Then a Chow-Liu-like method, based on a dependence measure derived from the copula, is proposed to estimate the maximum spanning tree of bivariate copulas associated with the bivariate dependence relations. The main advantage of the approach is that learning with the empirical copula focuses on the dependence relations among the random variables, without needing to know the properties of the individual variables or to specify a parametric family for the entire underlying distribution. Experiments on two real-application data sets show the effectiveness of the proposed method.
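The Chow-Liu-style skeleton of the idea fits in a few lines: score every pair of variables with a rank-based (hence copula-driven, margin-free) dependence measure and keep the maximum spanning tree. The sketch below uses Kendall's tau as the pairwise score and synthetic data with a known chain structure; the paper's own nonparametric copula estimator is not reproduced here.

```python
import numpy as np
from scipy import stats
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(6)
n, d = 1000, 4
X = np.empty((n, d))
X[:, 0] = rng.normal(size=n)
for j in range(1, d):                                 # chain structure X0 -> X1 -> X2 -> X3
    X[:, j] = 0.8 * X[:, j - 1] + 0.6 * rng.normal(size=n)

D = np.zeros((d, d))
for i in range(d):
    for j in range(i + 1, d):
        tau, _ = stats.kendalltau(X[:, i], X[:, j])   # rank-based, invariant to the marginals
        D[i, j] = 1.0 - abs(tau)                      # small "distance" = strong dependence

# a minimum spanning tree on (1 - |tau|) is a maximum spanning tree on |tau|
tree = minimum_spanning_tree(D).toarray()
edges = [(i, j) for i in range(d) for j in range(d) if tree[i, j] > 0]
print("selected tree edges:", edges)                  # expected: [(0, 1), (1, 2), (2, 3)]
```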

12.
The Bayesian method is widely used to identify a joint distribution, which is modeled by marginal distributions and a copula. The joint distribution can be identified by a one-step procedure, which directly tests all candidate joint distributions, or by a two-step procedure, which first identifies the marginal distributions and then the copula. The weight-based Bayesian method using the two-step procedure and the Markov chain Monte Carlo (MCMC)-based Bayesian method using the one-step and two-step procedures have recently been developed. In this paper, a one-step weight-based Bayesian method and a two-step MCMC-based Bayesian method using parametric marginal distributions are proposed. Comparison studies among the Bayesian methods have not been thoroughly carried out. In this paper, the weight-based and MCMC-based Bayesian methods using the one-step and two-step procedures are compared through simulation studies to see which Bayesian method identifies the correct joint distribution accurately and efficiently. It is shown that the two-step weight-based Bayesian method has the best performance.

13.
One way to model a dependence structure is through the copula function, which is a means of capturing the dependence structure in the joint distribution of variables. Association measures such as Kendall's tau or Spearman's rho can be expressed as functionals of the copula. The dependence structure between two variables can be strongly influenced by a covariate, and it is of real interest to know how this dependence structure changes with the value taken by the covariate. This motivates the need for conditional copulas and the associated conditional Kendall's tau and Spearman's rho association measures. After introducing and motivating these concepts, two nonparametric estimators for a conditional copula are proposed and discussed. Nonparametric estimates for the conditional association measures are then derived. A key issue is that these measures are now viewed as functions of the covariate. The performance of all estimators is investigated via a simulation study, which also includes a data-driven algorithm for choosing the smoothing parameters. The usefulness of the methods is illustrated on two real data examples.
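One simple estimator in this spirit is a kernel-weighted conditional Kendall's tau, in which pairs of observations are weighted by how close their covariate values are to the evaluation point. The sketch below is an illustrative version (not necessarily the estimators studied in the paper): the data are simulated so that the dependence grows with the covariate, and the bandwidth is a fixed ad hoc choice rather than data-driven.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 600
x = rng.uniform(0, 1, n)                             # covariate
rho = 0.2 + 0.7 * x                                  # dependence strengthens with the covariate
y1 = rng.normal(size=n)
y2 = rho * y1 + np.sqrt(1 - rho**2) * rng.normal(size=n)

def conditional_tau(x0, h=0.1):
    """Kernel-weighted sample Kendall's tau at covariate value x0."""
    w = stats.norm.pdf((x - x0) / h)                 # kernel weights in the covariate
    ww = np.outer(w, w)
    s1 = np.sign(y1[:, None] - y1[None, :])
    s2 = np.sign(y2[:, None] - y2[None, :])
    num = np.sum(np.triu(ww * s1 * s2, k=1))         # weighted concordance minus discordance
    den = np.sum(np.triu(ww, k=1))
    return num / den

for x0 in (0.2, 0.5, 0.8):
    true_tau = 2 / np.pi * np.arcsin(0.2 + 0.7 * x0) # exact value for a Gaussian pair
    print(f"tau(x={x0}) = {conditional_tau(x0):+.2f}   (true {true_tau:+.2f})")
```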

14.
The capital asset pricing model (CAPM) has become a fundamental tool in finance for assessing the cost of capital, risk management, portfolio diversification and other financial applications. It is generally believed that the market risks of assets, often denoted by a beta coefficient, should change over time. In this paper, we model time-varying market betas in the CAPM by a smooth transition regime-switching CAPM with heteroscedasticity, which provides a flexible nonlinear representation of market betas as well as flexible asymmetry and clustering in volatility. We also employ quantile regression to investigate the nonlinear behavior of the market betas and volatility under various market conditions represented by different quantile levels. Parameter estimation is done by a Bayesian approach. Finally, we analyze some Dow Jones Industrial stocks to demonstrate the proposed models. The model selection method shows that the proposed smooth transition quantile CAPM-GARCH model is strongly preferred over a sharp threshold transition and a symmetric CAPM-GARCH model.
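The quantile-regression ingredient can be illustrated on its own with a plain CAPM regression estimated at several quantile levels; the smooth transition and GARCH components of the paper's model are omitted. The sketch assumes `statsmodels` is available, and the simulated returns are placeholders with a constant true beta of 1.2 (in real data the estimated betas would typically differ across quantiles).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 1500
r_m = 0.01 * rng.standard_t(df=5, size=n)                # market excess returns (placeholder)
r_i = 1.2 * r_m + 0.01 * rng.standard_t(df=5, size=n)    # asset excess returns, true beta = 1.2

X = sm.add_constant(r_m)
for q in (0.05, 0.5, 0.95):
    beta_q = sm.QuantReg(r_i, X).fit(q=q).params[1]      # market beta at quantile level q
    print(f"quantile {q:.2f}: beta = {beta_q:.2f}")
```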

15.
In this study, an intelligent forecasting-model selection system is proposed for refining portfolio structure estimation; it selects among different time series forecasting models and tracks the trend of the portfolio contents while refining the risk-return matrices of the components. Based on four inference rules in the intelligent selection mechanism, the support system seeks model solutions that appropriately track the behavior of index prices in portfolio optimization. The feasibility of the system is verified with a practical simulation experiment. The experimental results show that, for all examined investment assets, the presented system is an efficient way of handling changes in the internal structure of a portfolio. In addition, we find that the presented system can also be used as an alternative method for evaluating various forecasting models. Using major global markets as the empirical evidence for the portfolio contents, we show that the proposed system can help improve the efficient frontier of a portfolio.

16.
The estimation and management of risk is an important and complex task faced by market regulators and financial institutions. Accurate and reliable quantitative measures of risk are needed to minimize undesirable effects on a given portfolio from large fluctuations in market conditions. To accomplish this, a series of computational tools has been designed, implemented, and incorporated into MatRisk, an integrated environment for risk assessment developed in MATLAB. Besides standard measures, such as Value at Risk (VaR), the application includes other, more sophisticated risk measures that address the inability of VaR to properly characterize the structure of risk. Conditional risk measures can also be estimated for autoregressive models with heteroskedasticity, including some novel mixture models. These tools are illustrated with a comprehensive risk analysis of the Spanish IBEX35 stock index.

17.
Extreme value methods are widely used in financial applications such as risk analysis, forecasting and pricing models. One of the challenges with their application in finance is accounting for the temporal dependence between the observations, for example the stylised fact that financial time series exhibit volatility clustering. Various approaches have been proposed to capture this dependence. Commonly a two-stage approach is taken, where the volatility dependence is removed using a volatility model like a GARCH (or one of its many incarnations), followed by application of standard extreme value models to the assumed independent residual innovations. This study examines an alternative one-stage approach, which makes parameter estimation and accounting for the associated uncertainties more straightforward than in the two-stage approach. The location and scale parameters of the extreme value distribution are defined to follow a conditional autoregressive heteroscedasticity process. Essentially, the model implements GARCH volatility via the extreme value model parameters. Bayesian inference is used and implemented via Markov chain Monte Carlo to permit all sources of uncertainty to be accounted for. The model is applied to both simulated and empirical data to demonstrate performance in extrapolating the extreme quantiles and quantifying the associated uncertainty.
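For contrast with the one-stage model proposed here, the conventional two-stage baseline can be sketched in a few lines: filter volatility first, then fit a generalised Pareto tail to the standardised residuals. In the sketch an EWMA (RiskMetrics-style) filter replaces a full GARCH fit to keep things self-contained, and the data, threshold and quantile level are illustrative choices only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
r = 0.01 * rng.standard_t(df=4, size=3000)           # heavy-tailed returns (placeholder data)

# stage 1: EWMA volatility filter and standardised residuals
lam, var = 0.94, np.var(r)
sigma2 = np.empty_like(r)
for t in range(len(r)):
    sigma2[t] = var
    var = lam * var + (1 - lam) * r[t] ** 2
z = r / np.sqrt(sigma2)

# stage 2: peaks-over-threshold fit of a generalised Pareto tail to the residuals
u = np.quantile(z, 0.95)                             # threshold at the 95th percentile
exc = z[z > u] - u
xi, _, sigma = stats.genpareto.fit(exc, floc=0)

# extreme quantile of the innovations via the POT quantile formula
p, p_u = 0.999, exc.size / z.size
q = u + sigma / xi * (((1 - p) / p_u) ** (-xi) - 1)
print(f"GPD shape = {xi:.2f}, 99.9% innovation quantile = {q:.2f}")
```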

18.
In this paper, we use Conditional Value-at-Risk (CVaR) to measure risk and adopt nonparametric estimation to explore the mean-CVaR portfolio selection problem. First, we obtain an estimation formula for CVaR by using a nonparametric estimate of the density of the loss function, and formulate two nonparametric mean-CVaR portfolio selection models based on two methods of bandwidth selection. Second, in both the case where short-selling is allowed and the case where it is forbidden, we prove that the two nonparametric mean-CVaR models are convex optimization problems. Third, we show that when CVaR is solved for, the corresponding VaR is also obtained as a by-product. Finally, we present a numerical example with Monte Carlo simulations to demonstrate the usefulness and effectiveness of our results, and compare our nonparametric method with the popular linear programming method.
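The nonparametric ingredient can be illustrated for a single fixed portfolio: smooth the loss distribution with a Gaussian kernel, obtain VaR by root finding, and read off CVaR from the smoothed expected exceedance (whose per-observation contribution has a closed form under a Gaussian kernel). The losses, bandwidth rule and confidence level below are illustrative; the portfolio optimisation layer of the paper is not included.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(10)
loss = 0.01 * rng.standard_t(df=5, size=2000)        # placeholder portfolio losses
alpha = 0.95
h = 1.06 * loss.std() * loss.size ** (-1 / 5)        # Silverman-type bandwidth

F = lambda x: stats.norm.cdf((x - loss) / h).mean()  # kernel-smoothed loss CDF
var = optimize.brentq(lambda x: F(x) - alpha, loss.min(), loss.max())   # VaR solves F(VaR) = alpha

# smoothed E[(L - VaR)^+]: each observation contributes (L_i - v) * Phi(a_i) + h * phi(a_i)
a = (loss - var) / h
excess = np.mean((loss - var) * stats.norm.cdf(a) + h * stats.norm.pdf(a))
cvar = var + excess / (1 - alpha)
print(f"VaR({alpha:.0%}) = {var:.4f}, CVaR({alpha:.0%}) = {cvar:.4f}")
```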

19.
Mixtures of truncated basis functions (MoTBFs) have recently been proposed as a generalisation of mixtures of truncated exponentials and mixtures of polynomials for modelling univariate and conditional distributions in hybrid Bayesian networks. In this paper we analyse the problem of learning the parameters of marginal and conditional MoTBF densities when both prior knowledge and data are available. Incorporating prior knowledge provides a valuable tool for obtaining useful models, especially in application domains where data are costly or scarce and prior knowledge is available from practitioners. We explore scenarios where the prior knowledge can be expressed as an MoTBF density that is afterwards combined with another MoTBF density estimated from the available data. The resulting model remains within the MoTBF class, which is a convenient property from the point of view of inference in hybrid Bayesian networks. The performance of the proposed method is tested in a series of experiments carried out on synthetic and real data.

20.
Copulas have attracted significant attention in the recent literature for modeling multivariate observations. An important feature of copulas is that they enable us to specify the univariate marginal distributions and their joint behavior separately. The copula parameter captures the intrinsic dependence between the marginal variables, and it can be estimated by parametric or semiparametric methods. For practical applications, the so-called inference function for margins (IFM) method has emerged as the preferred fully parametric method because it is close to maximum likelihood (ML) in approach and is easier to implement. The purpose of this paper is to compare the ML and IFM methods with a semiparametric (SP) method that treats the univariate marginal distributions as unknown functions. We consider the SP method proposed by Genest et al. [1995. A semiparametric estimation procedure of dependence parameters in multivariate families of distributions. Biometrika 82(3), 543-552], which has attracted considerable interest in the literature. The results of an extensive simulation study reported here show that the ML/IFM methods are nonrobust against misspecification of the marginal distributions, and that the SP method performs better than the ML and IFM methods overall. A data example on household expenditure is used to illustrate the application of various data-analytic methods in applying the SP method, and to compare and contrast the ML, IFM and SP methods. The main conclusion is that, in terms of statistical computation and data analysis, the SP method is better than the ML and IFM methods when the marginal distributions are unknown, which is almost always the case in practice.
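The contrast between parametric and rank-based handling of the marginals can be seen in a short experiment. In the sketch below the data come from a Gaussian copula with heavily skewed (lognormal) marginals; the copula correlation is then recovered (i) IFM-style with wrongly assumed normal marginals and (ii) SP-style from rank-based pseudo-observations. For brevity, maximisation of the copula (pseudo-)likelihood is replaced in both cases by the correlation of the normal scores, a simple consistent shortcut for the Gaussian copula.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n, rho = 2000, 0.6
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
x = np.exp(z)                                        # lognormal marginals, Gaussian copula

def score_corr(u):
    """Correlation of normal scores Phi^{-1}(u): a simple Gaussian-copula estimator."""
    s = stats.norm.ppf(u)
    return np.corrcoef(s[:, 0], s[:, 1])[0, 1]

# (i) IFM-style with misspecified normal marginals fitted to the skewed data
u_ifm = stats.norm.cdf((x - x.mean(axis=0)) / x.std(axis=0))
u_ifm = np.clip(u_ifm, 1e-6, 1 - 1e-6)               # keep the normal scores finite
# (ii) SP-style rank-based pseudo-observations (marginals left unspecified)
u_sp = np.column_stack([stats.rankdata(x[:, j]) / (n + 1) for j in range(2)])

print(f"true rho = {rho}, IFM with wrong marginals = {score_corr(u_ifm):.2f}, "
      f"SP with ranks = {score_corr(u_sp):.2f}")
```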
