Similar Literature (20 matching records)
1.
In this paper, we use Conditional Value-at-Risk (CVaR) to measure risk and adopt the methodology of nonparametric estimation to explore the mean–CVaR portfolio selection problem. First, we derive an estimation formula for CVaR by using a nonparametric estimate of the density of the loss function, and formulate two nonparametric mean–CVaR portfolio selection models based on two methods of bandwidth selection. Second, in both cases, when short-selling is allowed and when it is forbidden, we prove that the two nonparametric mean–CVaR models are convex optimization problems. Third, we show that when CVaR is solved for, the corresponding VaR is obtained as a by-product. Finally, we present a numerical example with Monte Carlo simulations to demonstrate the usefulness and effectiveness of our results, and compare our nonparametric method with the popular linear programming method.
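A minimal illustration of the point that VaR falls out as a by-product of computing CVaR, here using plain empirical quantiles on simulated losses rather than the paper's kernel-density estimator and bandwidth-selection rules; the portfolio weights and return distribution below are hypothetical:

import numpy as np

rng = np.random.default_rng(0)
returns = rng.multivariate_normal([0.001, 0.0005], [[1e-4, 2e-5], [2e-5, 4e-5]], size=10_000)
w = np.array([0.6, 0.4])                         # hypothetical portfolio weights
losses = -(returns @ w)                          # loss = negative portfolio return

alpha = 0.95
var_alpha = np.quantile(losses, alpha)           # VaR: the alpha-quantile of the loss
cvar_alpha = losses[losses >= var_alpha].mean()  # CVaR: mean loss beyond VaR
print(f"VaR_{alpha}: {var_alpha:.4f}, CVaR_{alpha}: {cvar_alpha:.4f}")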

2.
Shared kernel models for class conditional density estimation
We present probabilistic models which are suitable for class conditional density estimation and can be regarded as shared kernel models, where sharing means that each kernel may contribute to the estimation of the conditional densities of all classes. We first propose a model that constitutes an adaptation of the classical radial basis function (RBF) network (with full sharing of kernels among classes) where the outputs represent class conditional densities. At the opposite extreme is the separate mixtures model, where the density of each class is estimated using a separate mixture density (no sharing of kernels among classes). We present a general model that allows for the expression of intermediate cases where the degree of kernel sharing can be specified through an extra model parameter. This general model encompasses both of the above-mentioned models as special cases. In all proposed models the training process is treated as a maximum likelihood problem, and expectation-maximization algorithms have been derived for adjusting the model parameters.
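As a point of reference for the no-sharing end of the spectrum, the separate-mixtures special case can be sketched with an off-the-shelf Gaussian mixture fitted per class by EM; the shared-kernel and intermediate-sharing models require a custom EM and are not reproduced here. The dataset, component count, and equal class priors below are hypothetical:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X0 = rng.normal(loc=0.0, scale=1.0, size=(200, 2))   # class 0 samples
X1 = rng.normal(loc=2.5, scale=1.2, size=(200, 2))   # class 1 samples

# Separate mixtures: one mixture density per class, no kernel sharing.
gmm0 = GaussianMixture(n_components=3, random_state=0).fit(X0)
gmm1 = GaussianMixture(n_components=3, random_state=0).fit(X1)

x_new = np.array([[1.0, 1.0]])
log_p0 = gmm0.score_samples(x_new)    # log class-conditional density p(x | class 0)
log_p1 = gmm1.score_samples(x_new)
# Combine with class priors (0.5 each here) via Bayes' rule for classification.
posterior_0 = 1.0 / (1.0 + np.exp(log_p1 - log_p0))
print(posterior_0)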

3.
Since the introduction of the Autoregressive Conditional Heteroscedasticity (ARCH) model of Engle [R. Engle, Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation, Econometrica 50 (1982) 987–1007], the literature on modelling the conditional second moment has grown steadily over the last two decades. Many extensions and alternatives to the original ARCH model have been proposed, aiming to capture the dynamics of volatility more accurately. Interestingly, the Quasi Maximum Likelihood Estimator (QMLE) with normal density is typically used to estimate the parameters in these models. As such, the higher moments of the underlying distribution are assumed to be the same as those of the normal distribution. However, various studies reveal that the higher moments, such as skewness and kurtosis, of the distribution of financial returns are not likely to be the same as those of the normal distribution, and in some cases they are not even constant over time. These findings have significant implications for risk management, especially for the calculation of Value-at-Risk (VaR), which focuses on the negative quantile of the return distribution. Failure to accurately capture the shape of the negative quantile produces an inaccurate measure of risk and subsequently leads to misleading decisions in risk management. This paper proposes to model the distribution of financial returns more accurately by introducing a general framework based on the maximum entropy density (MED). The main advantage of the MED is that it provides a general framework to estimate the distribution function directly from a given set of data, and it offers a convenient way to model higher-order moments up to any arbitrary finite order k. However, this flexibility comes at a high computational cost as k increases; therefore, this paper proposes an alternative model that reduces computation time substantially. Moreover, the sensitivity of the parameters in the MED with respect to the dynamic changes of moments is derived analytically. This result is important as it relates the dynamic structure of the moments to the parameters in the MED. The usefulness of this approach is demonstrated using 5 min intra-daily returns of the Euro/USD exchange rate.
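A rough sketch of fitting a maximum entropy density of order k on a truncated support: the exponential-family form exp(sum_j lambda_j x^j) is fitted by minimizing the convex dual (log-partition minus the inner product with the sample moments). The grid-based integration, support bounds, optimizer, and k = 4 below are illustrative choices, not the paper's specification, and the numerics are not production-grade:

import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.standard_t(df=5, size=5_000)            # hypothetical return sample
k = 4                                           # match moments up to order 4
grid = np.linspace(x.min() - 1, x.max() + 1, 2_000)
powers = np.vstack([grid**j for j in range(1, k + 1)])           # shape (k, grid)
sample_moments = np.array([np.mean(x**j) for j in range(1, k + 1)])

def dual(lam):
    # log partition Z(lam) = integral of exp(sum_j lam_j x^j) over the truncated support
    log_z = np.log(trapezoid(np.exp(lam @ powers), grid))
    return log_z - lam @ sample_moments         # convex dual of the max-entropy problem

res = minimize(dual, x0=np.zeros(k), method="Nelder-Mead")
lam = res.x
density = np.exp(lam @ powers)
density /= trapezoid(density, grid)             # normalised MED estimate on the grid
print(lam)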

4.
In biomedical research there is often interest in describing covariate distributions given different survival groups. This is not immediately available due to censoring. In this paper we develop an empirical estimate of the conditional covariate distribution under the proportional hazards regression model. We show that it converges weakly to a Gaussian process and provide its variance estimate. We then apply kernel smoothing to obtain an estimate of the corresponding density function. The density estimate is consistent and has the same rate of convergence as the classical kernel density estimator. We have developed an R package to implement our methodology, which is demonstrated through the Mayo Clinic primary biliary cirrhosis data.

5.
The density-weighted averaged derivative estimator gives a computationally convenient, consistent and asymptotically normal (CAN) estimate of the parametric component of a semiparametric single index model. This model includes some important parametric models as special cases, such as linear regression, Logit/Probit, Tobit, and Box–Cox and other transformation models. The estimator involves a nonparametric kernel density estimate and thus faces the problem of bandwidth selection. A reasonable way of selecting the bandwidth for point estimation is to minimize the mean squared error. Alternatively, for the purposes of hypothesis testing and confidence interval estimation, we may wish to choose it so that it minimizes the normal approximation error. The purpose of this paper is to propose a new bandwidth suitable for these purposes by minimizing the normal approximation error in the tail of the exact distribution of the statistic, using the higher-order asymptotic theory of the Edgeworth expansion or the bootstrap method.

6.
An encompassing prior (EP) approach to facilitate Bayesian model selection for nested models with inequality constraints has been previously proposed. In this approach, samples are drawn from the prior and posterior distributions of an encompassing model that contains an inequality restricted version as a special case. The Bayes factor in favor of the inequality restriction then simplifies to the ratio of the proportions of posterior and prior samples consistent with the inequality restriction. This formalism has been applied almost exclusively to models with inequality or “about equality” constraints. It is shown that the EP approach naturally extends to exact equality constraints by considering the ratio of the heights for the posterior and prior distributions at the point that is subject to test (i.e., the Savage-Dickey density ratio). The EP approach generalizes the Savage-Dickey ratio method, and can accommodate both inequality and exact equality constraints. The general EP approach is found to be a computationally efficient procedure to calculate Bayes factors for nested models. However, the EP approach to exact equality constraints is vulnerable to the Borel-Kolmogorov paradox, the consequences of which warrant careful consideration.
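A toy sketch of both cases of the EP computation for a single parameter theta in an encompassing model, using hypothetical stand-ins for prior and posterior samples: the inequality Bayes factor is a ratio of proportions of samples satisfying the constraint, and the exact-equality Bayes factor is the Savage-Dickey ratio of (kernel-estimated) posterior to prior density heights at the tested point.

import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(3)
prior_draws = rng.normal(0.0, 1.0, size=50_000)       # hypothetical N(0,1) prior on theta
posterior_draws = rng.normal(0.4, 0.2, size=50_000)   # stand-in for MCMC posterior draws

# Inequality constraint H1: theta > 0 (Bayes factor of H1 against the encompassing model)
bf_inequality = np.mean(posterior_draws > 0) / np.mean(prior_draws > 0)

# Exact equality H0: theta = 0 (Savage-Dickey density ratio)
theta0 = 0.0
post_density_at_0 = gaussian_kde(posterior_draws)(theta0)[0]
prior_density_at_0 = norm.pdf(theta0, 0.0, 1.0)       # prior height known in closed form
bf_equality = post_density_at_0 / prior_density_at_0

print(bf_inequality, bf_equality)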

7.
Different from conventional gradient-based neural dynamics, a special type of neural dynamics has been proposed by Zhang et al. for the online solution of time-varying and/or static (or termed, time-invariant) problems. The design of Zhang dynamics (ZD) is based on the elimination of an indefinite error function, instead of the elimination of a square-based positive (or at least lower-bounded) energy function usually associated with gradient dynamics (GD). In this paper, we generalize, propose and investigate the continuous-time ZD model and its discrete-time models in two situations (i.e., the time-derivative of the coefficient being known or unknown) for time-varying cube root finding, including the complex-valued continuous-time ZD model for finding cube roots in the complex domain. In addition, to find the static scalar-valued cube root, a simplified continuous-time ZD model and its discrete-time model are generated. By focusing on such static problem solving, the Newton-Raphson iteration is found to be a special case of the discrete-time ZD model, obtained by utilizing the linear activation function and fixing the step size at 1. Computer-simulation and testing results demonstrate the efficacy of the proposed ZD models (including real-valued ZD models and complex-valued ZD models) for time-varying and static cube root finding, as well as the link to, and a new interpretation of, the Newton-Raphson iteration.
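For the static scalar case, the discrete-time ZD model with a linear activation function and unit step size reduces to the Newton-Raphson iteration for the cube root, which can be checked directly (the target value a and starting point below are hypothetical):

def cube_root_newton(a, x0=1.0, tol=1e-12, max_iter=100):
    """Newton-Raphson for f(x) = x**3 - a; the static, linear-activation,
    step-size-1 special case of the discrete-time ZD model."""
    x = x0
    for _ in range(max_iter):
        x_new = x - (x**3 - a) / (3 * x**2)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(cube_root_newton(27.0))   # approximately 3.0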

8.
In this paper we introduce the Birnbaum–Saunders autoregressive conditional duration (BS-ACD) model as an alternative to the existing ACD models which allow a unimodal hazard function. The BS-ACD model is the first ACD model to integrate the concept of conditional quantile estimation into an ACD model by specifying the time-varying model dynamics in terms of the conditional median duration, instead of the conditional mean duration. In the first half of this paper we illustrate how the BS-ACD model relates to the traditional ACD model, and in the second half we discuss the assessment of goodness-of-fit for ACD models in general. In order to facilitate both of these points, we explicitly illustrate the similarities and differences between the BS-ACD model and the Generalized Gamma ACD (GG-ACD) model by comparing and contrasting their formulation, estimation, and results from fitting both models to samples for six NYSE securities.

9.

In software maintenance, a system needs regression testing after the software has been modified. Regression testing confirms that modified code has no adverse effects and does not introduce new faults into the existing functionality of the software. For object-oriented programs, code-based testing is generally expensive. In this study, we propose a regression testing technique for object-oriented software that uses unified modeling language (UML) diagrams together with code-based analysis. A combined design- and code-based technique with an evolutionary approach is presented to select the best possible test cases from the test suite. We use a dependency graph as an intermediate representation of the object-oriented program to identify changes. The selection of test cases is done at the design level using the UML models, which are compared to identify the changes between them. The proposed approach maximizes the value of the Average Percentage of Faults Detected (APFD).
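The APFD value being maximized can be computed directly from the fault-detection matrix of an ordered test suite using the standard APFD definition, APFD = 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n), where TF_i is the position of the first test revealing fault i; the toy matrix below is hypothetical and assumes every fault is revealed by at least one test:

import numpy as np

def apfd(order, detects):
    """order: permutation of test indices; detects[t][f] = 1 if test t reveals fault f."""
    detects = np.asarray(detects)
    n, m = detects.shape                      # n tests, m faults
    ordered = detects[order]                  # rows in execution order
    tf = np.argmax(ordered, axis=0) + 1       # 1-based position of first detection per fault
    return 1.0 - tf.sum() / (n * m) + 1.0 / (2.0 * n)

# Hypothetical 4-test, 3-fault example
detects = [[1, 0, 0],
           [0, 1, 0],
           [0, 0, 1],
           [1, 1, 1]]
print(apfd([3, 0, 1, 2], detects))   # running the all-revealing test first scores 0.875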


10.
In this study, a model identification instrument to determine the variance component structure for generalized linear mixed models (GLMMs) is developed based on the conditional Akaike information (CAI). In particular, an asymptotically unbiased estimator of the CAI (denoted CAICc) is derived as the model selection criterion, which takes the estimation uncertainty in the variance component parameters into consideration. The relationship between bias correction and generalized degrees of freedom for GLMMs is also explored. Simulation results show that the estimator performs well. The proposed criterion demonstrates a high proportion of correct model identification for GLMMs. Two sets of real data (epilepsy seizure count data and polio incidence data) are used to illustrate the proposed model identification method.

11.
In this paper, we establish an economic production quantity model for a manufacturer (or wholesaler) with defective items when its supplier offers an up-stream trade credit M while it in turn provides its buyers (or retailers) a down-stream trade credit N. The proposed model is in a general framework that includes numerous previous models as special cases. In contrast to the traditional differential calculus approach, we use a simple-to-understand and easy-to-apply arithmetic–geometric inequality method to find the optimal solution. Furthermore, we provide some theoretical results to characterize the optimal solution. Finally, several numerical examples are presented to illustrate the proposed model and the optimal solution.
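The arithmetic–geometric inequality argument can be illustrated on the classical EOQ cost, omitting the trade-credit and defective-item terms of the actual model: T(Q) = DS/Q + hQ/2 >= 2*sqrt(DSh/2), with equality exactly when DS/Q = hQ/2, i.e. Q* = sqrt(2DS/h). A quick numerical check with hypothetical parameters:

import numpy as np

D, S, h = 1200.0, 50.0, 3.0          # hypothetical demand, setup cost, holding cost
Q_star = np.sqrt(2 * D * S / h)      # AM-GM equality point
cost = lambda Q: D * S / Q + h * Q / 2
lower_bound = 2 * np.sqrt(D * S * h / 2)

print(Q_star, cost(Q_star), lower_bound)                      # cost(Q_star) attains the bound
print(cost(0.8 * Q_star) > lower_bound, cost(1.2 * Q_star) > lower_bound)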

12.
In this paper we introduce a class of fuzzy clusterwise regression models with LR fuzzy response variable and numeric explanatory variables, which embeds fuzzy clustering into a fuzzy regression framework. The model bypasses the heterogeneity problem that could arise in fuzzy regression by subdividing the dataset into homogeneous clusters and performing separate fuzzy regression on each cluster. The integration of the clustering model into the regression framework allows us to simultaneously estimate the regression parameters and the membership degree of each observation to each cluster by optimizing a single objective function. The class of models proposed here includes, as special cases, the fuzzy clusterwise linear regression model and the fuzzy clusterwise polynomial regression model. We also introduce a set of goodness-of-fit indices to evaluate the fit of the regression model within each cluster as well as in the whole dataset. Finally, we consider some cluster validity criteria that are useful in identifying the “optimal” number of clusters. Several applications are provided in order to illustrate the approach.
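A compact sketch of the alternating scheme behind clusterwise regression with fuzzy memberships, shown here for the simpler crisp-response, linear special case (fuzzy c-regression with fuzzifier m): memberships are updated from per-cluster squared residuals, and coefficients from membership-weighted least squares. The LR fuzzy response handling and the goodness-of-fit indices of the paper are not reproduced; data and settings are hypothetical.

import numpy as np

rng = np.random.default_rng(4)
n, C, m, iters = 300, 2, 2.0, 50
x = rng.uniform(0, 10, n)
y = np.where(rng.random(n) < 0.5, 1.0 + 2.0 * x, 8.0 - 1.0 * x) + rng.normal(0, 0.5, n)
X = np.column_stack([np.ones(n), x])                 # design matrix with intercept

U = rng.dirichlet(np.ones(C), size=n)                # initial fuzzy memberships, shape (n, C)
for _ in range(iters):
    # Membership-weighted least squares per cluster
    B = []
    for c in range(C):
        w = U[:, c] ** m
        B.append(np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y)))
    B = np.array(B)                                  # (C, 2) regression coefficients
    # Update memberships from squared residuals (fuzzy c-regression rule)
    E = np.square(y[:, None] - X @ B.T) + 1e-12      # (n, C) squared residuals
    U = 1.0 / np.sum((E[:, :, None] / E[:, None, :]) ** (1.0 / (m - 1.0)), axis=2)

print(B)   # approximately the two generating lines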

13.
As a nonparametric method, the support vector machine (SVM) has been widely applied in credit scoring. To overcome the drop in accuracy caused by its inability to perform feature selection when trained on high-dimensional data, a credit scoring model is constructed in which a C4.5 decision tree optimizes the SVM. The C4.5 information gain ratio is used for attribute selection to reduce redundant attributes. The model determines its optimal parameters through grid search, evaluates performance using the F-score and average accuracy, and is validated on two public datasets. The empirical analysis shows that the C4.5-optimized SVM credit scoring model effectively reduces the amount of data to be learned and achieves higher classification accuracy and practicality than various traditional single models.
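A sketch of the pipeline under stated substitutions: scikit-learn has no C4.5 implementation, so the information-gain-ratio attribute ranking is approximated here with mutual information, followed by a grid-searched SVM; the dataset, feature budget, and parameter grid are hypothetical stand-ins.

from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)            # stand-in public dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(mutual_info_classif, k=10)),   # proxy for C4.5 gain-ratio ranking
    ("svm", SVC()),
])
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10], "svm__gamma": ["scale", 0.01, 0.1]}, cv=5)
grid.fit(X_tr, y_tr)

pred = grid.predict(X_te)
print("F-score:", f1_score(y_te, pred), "accuracy:", accuracy_score(y_te, pred))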

14.
Estimating the number of mixture components is key to cluster analysis and density estimation for medical images. To address the overfitting problem of estimation methods based on information criteria, a new estimation method based on the characteristic function of the Gaussian mixture model is proposed. First, the characteristic function of the Gaussian mixture model for a medical image is defined; then a criterion for estimating the number of mixture components based on the characteristic function is constructed; finally, an algorithm implementing the criterion is designed. The new method regulates the logarithmic characteristic function by choosing suitable parameters, so that the penalty function plays a balancing role. Experiments on simulated and real data show that the number of components K determined by this method is more reasonable than that determined by classical information criteria, avoiding the overfitting problem on medical images.
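For context, the classical information-criterion approach that the paper argues can overfit is easy to sketch: fit Gaussian mixtures over a range of component counts and pick the K minimizing BIC. The characteristic-function-based criterion proposed in the paper is not reproduced here, and the data below are simulated stand-ins for medical-image intensities.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
# Hypothetical 1-D intensity sample drawn from a 3-component mixture
data = np.concatenate([rng.normal(60, 5, 400), rng.normal(110, 8, 400), rng.normal(170, 6, 200)])
data = data.reshape(-1, 1)

bics = {}
for k in range(1, 8):
    gmm = GaussianMixture(n_components=k, n_init=3, random_state=0).fit(data)
    bics[k] = gmm.bic(data)

best_k = min(bics, key=bics.get)
print(bics, "selected K:", best_k)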

15.
A conditional density function, which describes the relationship between response and explanatory variables, plays an important role in many analysis problems. In this paper, we propose a new kernel-based parametric method to estimate conditional density. An exponential function is employed to approximate the unknown density, and its parameters are computed from the given explanatory variable via a nonlinear mapping using kernel principal component analysis (KPCA). We develop a new kernel function, a variant of polynomial kernels, to be used in KPCA. The proposed method is compared with the Nadaraya-Watson estimator through numerical simulation and practical data. Experimental results show that the proposed method outperforms the Nadaraya-Watson estimator in terms of revised mean integrated squared error (RMISE). Therefore, the proposed method is effective for estimating conditional densities.
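The Nadaraya-Watson benchmark against which the KPCA-based method is compared can be written down directly as a double-kernel estimator, f_hat(y|x) = sum_i K_hx(x - X_i) K_hy(y - Y_i) / sum_i K_hx(x - X_i); a minimal numpy version with hypothetical data and bandwidths:

import numpy as np

def nw_conditional_density(x0, y_grid, X, Y, hx, hy):
    """Nadaraya-Watson type conditional density estimate f(y | x0) evaluated on y_grid."""
    kx = np.exp(-0.5 * ((x0 - X) / hx) ** 2)                     # Gaussian weights in x
    ky = np.exp(-0.5 * ((y_grid[:, None] - Y) / hy) ** 2) / (hy * np.sqrt(2 * np.pi))
    return (ky @ kx) / kx.sum()

rng = np.random.default_rng(6)
X = rng.uniform(-2, 2, 1_000)
Y = np.sin(X) + rng.normal(0, 0.3, X.size)                       # hypothetical regression data
y_grid = np.linspace(-2, 2, 200)
dens = nw_conditional_density(0.5, y_grid, X, Y, hx=0.2, hy=0.15)
print(dens.sum() * (y_grid[1] - y_grid[0]))                      # integrates to roughly 1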

16.
In this paper, we extend Goyal's economic order quantity (EOQ) model to allow for the following four important facts: (1) the manufacturer's selling price per unit is necessarily higher than its unit cost, (2) the interest rate charged by a bank is not necessarily higher than the manufacturer's investment return rate, (3) the demand rate is a downward‐sloping function of the price, and (4) an economic production quantity (EPQ) model is a generalized EOQ model. We then establish an appropriate EPQ model accordingly, in which the manufacturer receives the supplier trade credit and provides the customer trade credit simultaneously. As a result, the proposed model is in a general framework that includes numerous previous models as special cases. Furthermore, we provide an easy‐to‐use closed‐form optimal solution to the problem for any given price. Finally, we develop an algorithm for the manufacturer to determine its optimal price and lot size simultaneously.

17.
Many wavelet-based algorithms have been proposed in recent years to solve the problem of function estimation from noisy samples. In particular it has been shown that threshold approaches lead to asymptotically optimal estimation and are extremely effective when dealing with real data. Working under a Bayesian perspective, in this paper we first study optimality of the hard and soft thresholding rules when the function is modelled as a stochastic process with known covariance function. Next, we consider the case where the covariance function is unknown, and propose a novel approach that models the covariance as a certain wavelet combination estimated from data by Bayesian model selection. Simulated data are used to show that the new method outperforms traditional threshold approaches as well as other wavelet-based Bayesian techniques proposed in the literature.
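The hard and soft thresholding rules the paper starts from can be sketched with PyWavelets using the familiar universal threshold sigma*sqrt(2 log n); the Bayesian covariance-model selection step proposed in the paper is not shown, and the signal and noise level below are hypothetical.

import numpy as np
import pywt

rng = np.random.default_rng(7)
n = 1024
t = np.linspace(0, 1, n)
clean = np.sin(6 * np.pi * t) + (t > 0.5)           # hypothetical piecewise-smooth signal
noisy = clean + rng.normal(0, 0.2, n)

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate from finest-scale details
thr = sigma * np.sqrt(2 * np.log(n))                 # universal threshold

# Soft-threshold the detail coefficients, keep the coarse approximation untouched
den_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(den_coeffs, "db4")
print(np.mean((denoised - clean) ** 2))              # MSE of the reconstruction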

18.
A new nonparametric estimator for the conditional hazard rate is proposed, which is defined as the ratio of local linear estimators for the conditional density and survivor function. The resulting hazard rate estimator is shown to be pointwise consistent and asymptotically normally distributed under appropriate conditions. Furthermore, plug-in bandwidths based on normal and uniform reference distributions and minimizing the asymptotic mean squared error are derived. In terms of the mean squared error the new estimator is highly competitive in comparison to existing estimators for the conditional hazard rate. Moreover, its smoothing parameters are relatively robust to misspecification of the reference distributions, which facilitates bandwidth selection. Additionally, the new hazard rate estimator is conveniently calculated using standard software for local linear regression. The use of the local linear hazard rate is illustrated in an application to kidney transplant data.

19.
In this paper, a multidimensional 0–1 knapsack model with fuzzy parameters is defuzzified using triangular norm (t-norm) and t-conorm fuzzy relations. In the first part of the paper, the surrogate relaxation models of the defuzzified models are developed, and surrogate constraint normalization rules are proposed for setting the surrogate multipliers. A methodology is proposed to evaluate several surrogate constraint normalization rules from the literature as well as one rule proposed in this paper. Three distance metrics are used to measure the distance between the fuzzy objective function values of the surrogate models and those of the original models. A numerical experiment shows that the rule proposed in this paper dominates the other rules considered, for all three distance metrics, under all of the stated assumptions. In the second part of the paper, a methodology is proposed for multi-attribute project portfolio selection, in which optimal solutions from the original defuzzified models as well as near-optimal solutions from their surrogate relaxation models are considered as alternatives. The aggregation of evaluation results is managed using a simple yet effective method, the fuzzy Simple Additive Weighting (SAW) method. The methodology is then applied to a hypothetical construction project portfolio selection problem with multiple attributes.

20.
Besides optimizing classifier predictive performance and addressing the curse of dimensionality, feature selection techniques help keep a classification model as simple as possible. In this paper, we present a wrapper feature selection approach based on the Bat Algorithm (BA) and the Optimum-Path Forest (OPF), in which feature selection is modelled as a binary optimization problem, guided by BA and using the OPF accuracy over a validating set as the fitness function to be maximized. Moreover, we present a methodology to better estimate the quality of the reduced feature set. Experiments conducted on six public datasets demonstrate that the proposed approach provides statistically significantly more compact feature sets and, in some cases, can indeed improve classification effectiveness.
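A stripped-down sketch of the wrapper idea: a binary bat-style swarm (frequency-weighted velocities pushed through a sigmoid transfer function; the loudness and pulse-rate scheduling of the full BA are omitted) searches over feature masks, with cross-validated k-NN accuracy standing in for the OPF classifier as the fitness on a validating set. The dataset and hyperparameters are hypothetical.

import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(8)
X, y = load_wine(return_X_y=True)
n_bats, n_feat, iters, f_min, f_max = 15, X.shape[1], 30, 0.0, 2.0

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(KNeighborsClassifier(3), X[:, mask.astype(bool)], y, cv=3).mean()

pos = rng.integers(0, 2, size=(n_bats, n_feat))          # binary positions (feature masks)
vel = np.zeros((n_bats, n_feat))
fits = np.array([fitness(p) for p in pos])
best = pos[fits.argmax()].copy()

for _ in range(iters):
    freq = f_min + (f_max - f_min) * rng.random(n_bats)  # random frequency per bat
    vel += (pos - best) * freq[:, None]
    prob = 1.0 / (1.0 + np.exp(-vel))                    # sigmoid transfer to [0, 1]
    pos = (rng.random((n_bats, n_feat)) < prob).astype(int)
    fits = np.array([fitness(p) for p in pos])
    if fits.max() > fitness(best):
        best = pos[fits.argmax()].copy()

print("selected features:", np.flatnonzero(best), "CV accuracy:", fitness(best))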

