Similar documents
20 similar documents found (search time: 569 ms)
1.
Valuation of American options via basis functions   (total citations: 1, self-citations: 0, citations by others: 1)
After a brief review of recent developments in the pricing and hedging of American options, this paper modifies the basis function approach to adaptive control and neuro-dynamic programming, and applies it to develop: 1) nonparametric pricing formulas for actively traded American options and 2) simulation-based optimization strategies for complex over-the-counter options, whose optimal stopping problems are prohibitively difficult to solve numerically by standard backward induction algorithms because of the curse of dimensionality. An important issue in this approach is the choice of basis functions, for which some guidelines and their underlying theory are provided.  相似文献   
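A rough illustration of the basis-function idea (not the paper's algorithm): the sketch below prices a Bermudan-style American put by regression-based Monte Carlo, approximating the continuation value with simple polynomial basis functions of the stock price. All market parameters (S0, K, r, sigma, number of steps and paths) are arbitrary assumptions.

```python
# Regression-based Monte Carlo valuation of a Bermudan-style American put, with polynomial
# basis functions for the continuation value. Illustrative parameters only.
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 20_000
dt = T / n_steps
disc = np.exp(-r * dt)

# Simulate geometric Brownian motion paths (columns 0..n_steps are times 0..T).
z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
S = np.hstack([np.full((n_paths, 1), S0), S])

def payoff(s):
    return np.maximum(K - s, 0.0)

cash = payoff(S[:, -1])                       # cash flow if the option is held to maturity

# Backward induction: regress discounted future cash flows on basis functions of S_t.
for t in range(n_steps - 1, 0, -1):
    cash *= disc                              # discount cash flows from t+1 back to t
    itm = payoff(S[:, t]) > 0                 # regress on in-the-money paths only
    if itm.sum() < 10:
        continue
    x = S[itm, t] / K
    A = np.vander(x, 4)                       # basis functions: x^3, x^2, x, 1
    beta, *_ = np.linalg.lstsq(A, cash[itm], rcond=None)
    continuation = A @ beta
    exercise = payoff(S[itm, t])
    ex_now = exercise > continuation          # exercise where the immediate payoff beats continuation
    idx = np.where(itm)[0][ex_now]
    cash[idx] = exercise[ex_now]

price = disc * cash.mean()                    # discount from the first exercise date to time 0
print(f"estimated American put value: {price:.3f}")
```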

2.
Two obvious limitations exist for baseline kernel minimum squared error (KMSE): lack of sparseness of the solution and the ill-posed problem. Previous sparse methods for KMSE have overcome the second limitation using a regularization strategy, which introduces an increase in the computational cost to determine the regularization parameter. Hence, in this paper, a constructive sparse algorithm for KMSE (CS-KMSE) and its improved version (ICS-KMSE) are proposed which will simultaneously address the two limitations described above. CS-KMSE chooses the training samples that incur the largest reductions on the objective function as the significant nodes on the basis of the Householder transformation. In contrast with CS-KMSE, there is an additional replacement mechanism using Givens rotation in ICS-KMSE, which results in ICS-KMSE giving better performance than CS-KMSE in terms of sparseness. CS-KMSE and ICS-KMSE do not require the regularization parameter at all before they begin to choose significant nodes, which is beneficial since it saves on the model selection time. More importantly, CS-KMSE and ICS-KMSE terminate their procedures with an early stopping strategy that acts as an implicit regularization term, which avoids overfitting and curbs the sparse level on the solution of the baseline KMSE. Finally, in comparison with other algorithms, both ICS-KMSE and CS-KMSE have superior sparseness, and extensive comparisons confirm their effectiveness and feasibility.  相似文献   
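A loose sketch of constructive node selection for KMSE, not the Householder/Givens implementation used by CS-KMSE/ICS-KMSE: at each step the training sample whose kernel column most reduces the squared error is added, and an early-stopping tolerance stands in for the implicit regularization. The RBF kernel width, tolerance, and toy data are assumptions.

```python
# Greedy constructive selection of "significant nodes" for a kernel minimum squared error model.
import numpy as np

def rbf_kernel(X, Z, gamma=0.5):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_kmse(X, y, max_nodes=20, tol=1e-3):
    K = rbf_kernel(X, X)
    selected, prev_err, beta = [], np.inf, np.zeros(1)
    for _ in range(max_nodes):
        best_j, best_err, best_beta = None, prev_err, None
        for j in range(len(X)):                       # try each unselected sample as a new node
            if j in selected:
                continue
            Phi = np.column_stack([np.ones(len(X)), K[:, selected + [j]]])
            cand, *_ = np.linalg.lstsq(Phi, y, rcond=None)
            err = np.mean((y - Phi @ cand) ** 2)
            if err < best_err:
                best_j, best_err, best_beta = j, err, cand
        if best_j is None or prev_err - best_err < tol:   # early stopping: implicit regularization
            break
        selected.append(best_j)
        prev_err, beta = best_err, best_beta
    return selected, beta

# Toy usage: a 1-D two-class problem with +/-1 targets.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.where(np.sin(X[:, 0]) > 0, 1.0, -1.0)
nodes, beta = greedy_kmse(X, y)
print(f"selected {len(nodes)} significant nodes out of {len(X)} training samples")
```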

3.
Regularization and Bayesian methods for system identification have been repopularized in recent years, and have proved to be competitive w.r.t. classical parametric approaches. In this paper we shall make an attempt to illustrate how the use of regularization in system identification has evolved over the years, starting from the early contributions in both the Automatic Control and the Econometrics and Statistics literature. In particular we shall discuss some fundamental issues, such as compound estimation problems and exchangeability, which play an important role in regularization and Bayesian approaches, as also illustrated in early publications in Statistics. The historical and foundational issues will be given more emphasis (and space), at the expense of the more recent developments, which are only briefly discussed. The main reason for such a choice is that, while the recent literature is readily available and surveys have already been published on the subject, in the author’s opinion a clear link with past work has not been completely clarified.  相似文献

4.
Intrigued by some recent results on impulse response estimation by kernel and nonparametric techniques, we revisit the old problem of transfer function estimation from input–output measurements. We formulate a classical regularization approach, focused on finite impulse response (FIR) models, and find that regularization is necessary to cope with the high variance problem. This basic, regularized least squares approach is then a focal point for interpreting other techniques, like Bayesian inference and Gaussian process regression. The main issue is how to determine a suitable regularization matrix (Bayesian prior or kernel). Several regularization matrices are provided and numerically evaluated on a data bank of test systems and data sets. Our findings based on the data bank are as follows. The classical regularization approach with carefully chosen regularization matrices shows slightly better accuracy, and clearly better robustness in estimating the impulse response, than the standard prediction error method/maximum likelihood (PEM/ML) approach. If the goal is to estimate a model of given order as well as possible, a low order model is often better estimated by the PEM/ML approach, and a higher order model is often better estimated by model reduction on a high order regularized FIR model estimated with careful regularization. Moreover, an optimal regularization matrix that minimizes the mean square error matrix is derived and studied. The importance of this result lies in the fact that it gives the theoretical upper bound on the accuracy that can be achieved with this classical regularization approach.  相似文献
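A minimal sketch of the classical regularized least-squares FIR estimate on synthetic data, using a "DC"-type kernel as one common choice of regularization matrix. The hyperparameters (c, lam, rho, sig2) are fixed by hand here purely for illustration; in practice they would be tuned, e.g., by marginal-likelihood maximization.

```python
# Regularized least-squares estimation of a FIR model y(t) = sum_k g_k u(t-k) + e(t).
import numpy as np

rng = np.random.default_rng(0)
n, N = 50, 300                                             # FIR order, number of data points
g_true = 0.8 ** np.arange(n) * np.sin(0.9 * np.arange(n))  # a decaying "true" impulse response
u = rng.standard_normal(N + n)                             # input signal
y = np.convolve(u, g_true)[n:N + n] + 0.1 * rng.standard_normal(N)

# Regressor matrix: row t contains u(t), u(t-1), ..., u(t-n+1).
Phi = np.column_stack([u[n - k:N + n - k] for k in range(n)])

# DC kernel: P[i, j] = c * lam**((i + j) / 2) * rho**|i - j|  (encodes decay and smoothness).
c, lam, rho, sig2 = 1.0, 0.9, 0.95, 0.01
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
P = c * lam ** ((i + j) / 2) * rho ** np.abs(i - j)

# Regularized LS / MAP estimate: g_hat = (Phi'Phi + sig2 * P^-1)^-1 Phi'y
g_hat = np.linalg.solve(Phi.T @ Phi + sig2 * np.linalg.inv(P), Phi.T @ y)
g_ls = np.linalg.lstsq(Phi, y, rcond=None)[0]              # unregularized LS, for comparison

print("fit error, regularized:", np.linalg.norm(g_hat - g_true).round(4))
print("fit error, plain LS:   ", np.linalg.norm(g_ls - g_true).round(4))
```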

5.
In this paper, a neural network approach is used to understand the effects of fabric features and plasma processing parameters on fabric surface wetting properties. In this approach, fourteen features characterizing woven structures and two plasma parameters are taken as input variables, and the water contact angle cosine and the capillarity height of woven fabrics as output variables. In order to reduce the complexity of the model and effectively learn the network structure from a small amount of data, a fuzzy logic based method is used for selecting the most relevant parameters, which are taken as input variables of the reduced neural network models. With these relevant parameters, we can effectively control the plasma treatment by selecting the most appropriate fabric materials. Two techniques are used for improving the generalization capability of neural networks: (i) early stopping and (ii) Bayesian regularization. A methodology for optimizing such models is described. The learning abilities and prediction capabilities of the neural net models are compared in terms of different statistical performance criteria. Moreover, a connection weight method is used to determine the relative importance of each input variable in the networks. The obtained results show that the neural network models could predict the process performance with reasonable accuracy, with the model trained using Bayesian regularization providing the best results. Thus, it can be concluded that Bayesian regularization promises to be a valuable quantitative tool to evaluate, understand, and predict woven fabric surface modification by atmospheric air-plasma treatment.  相似文献

6.
Path-dependent options have become increasingly popular over the last few years, in particular in FX markets, because of the greater precision with which they allow investors to choose or avoid exposure to well-defined sources of risk. The goal of the paper is to exhibit the power of stochastic time changes and Laplace transform techniques in the evaluation and hedging of path-dependent options in the Black–Scholes–Merton setting. We illustrate these properties in the specific case of Asian options and continuously (de-) activating double-barrier options and show that in both cases, the pricing and, just as important, the hedging results are more accurate than the ones obtained through Monte Carlo simulations.  相似文献   

7.
Exploring constructive cascade networks   (total citations: 5, self-citations: 0, citations by others: 5)
Constructive algorithms have proved to be powerful methods for training feedforward neural networks. An important property of these algorithms is generalization. A series of empirical studies were performed to examine the effect of regularization on generalization in constructive cascade algorithms. It was found that the combination of early stopping and regularization resulted in better generalization than the use of early stopping alone. A cubic penalty term that greatly penalizes large weights was shown to be beneficial for generalization in cascade networks. An adaptive method of setting the regularization magnitude in constructive algorithms was introduced and shown to produce generalization results similar to those obtained with a fixed, user-optimized regularization setting. This adaptive method also resulted in the construction of smaller networks for more complex problems. The acasper algorithm, which incorporates the insights obtained from the empirical studies, was shown to have good generalization and network construction properties. This algorithm was compared to the cascade correlation algorithm on the Proben 1 and additional regression data sets.  相似文献   

8.
Since there is currently no standard method for building compact and efficient speech recognition systems, a new method is proposed: the trade-off coefficient in the Bayesian Information Criterion (BIC) is used to balance recognition accuracy against system complexity, and an improved Particle Swarm Optimization (PSO) algorithm is used to optimize the topology of the acoustic models, thereby building an efficient and compact speech recognition system. Experiments on TIDigits show that, compared with a baseline system of the same complexity built with the conventional method, the sentence accuracy of the new system improves by 7.85%, and compared with a baseline system of the same recognition rate, the system complexity is reduced by 51.4%, indicating that the new system achieves a high recognition rate at low complexity.  相似文献
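A minimal sketch of the weighted-BIC trade-off idea, not the paper's PSO search over acoustic model topologies: a weighting coefficient w on the complexity term biases model selection toward accuracy (small w) or compactness (large w). The candidate topologies and all numbers below are made up for illustration.

```python
# Model selection with a weighted BIC-style criterion.
import math

def weighted_bic(log_likelihood, n_params, n_obs, w=1.0):
    # Standard BIC is -2 log L + k log N; w scales the complexity penalty.
    return -2.0 * log_likelihood + w * n_params * math.log(n_obs)

# Toy candidate topologies: (held-out log-likelihood, number of parameters).
candidates = {
    "3 states / 2 mixtures": (-4400.0, 120),
    "5 states / 4 mixtures": (-3700.0, 400),
    "8 states / 8 mixtures": (-3650.0, 1280),
}
N = 5000  # number of observations

for w in (0.5, 1.0, 2.0):
    best = min(candidates, key=lambda name: weighted_bic(*candidates[name], N, w))
    print(f"w = {w}: selected topology -> {best}")
```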

9.
Empirical tests of the arbitrage pricing theory using measured variables rely on the accuracy of standard inferential theory in approximating the distribution of the estimated risk premiums and factor betas. The techniques employed thus far perform factor selection and model inference sequentially. Recent advances in Bayesian variable selection are adapted to an approximate factor model to investigate the role of measured economic variables in the pricing of securities. In finite samples, exact statistical inference is carried out using posterior distributions of functions of risk premiums and factor betas. The role of the panel dimensions in posterior inference is investigated. New empirical evidence is found of time-varying risk premiums with higher and more volatile expected compensation for bearing systematic risk during contraction phases. In addition, investors are rewarded for exposure to “Economic” risk.  相似文献   

10.
The spot markets often exhibit high and low volatilities that persist for a while. We classify spot market volatility into two states, high and low, and use Markov chain theory to construct a Two-State volatility model for pricing and hedging Taiwan stock index options (TXO). Compared to the binomial option pricing model, the Two-State model is more stable in convergence and faster in the early periods of convergence, but it is much more time-consuming, since the number of nodes and computations grows rapidly with the number of periods: the total node count grows as O(n⁴) for the quadrinomial lattice versus O(n²) for the binomial lattice. Empirically, according to the Markov chain, the Taiwan stock index has high volatility = 42.85%, low volatility = 17.39%, and probability of being in the high-volatility state = 0.3487 over the in-sample period from 1/6/1990 to 04/30/2008. Using as many as 87,160 TXO records covering the out-of-sample period from 05/02/2008 to 03/17/2010 and strike prices from 3600 to 8700, we demonstrate that the Two-State volatility model performs best in high-volatility periods when put options are used for pricing and hedging. However, to avoid the cost of taxes resulting from position changes, a longer-term (e.g. 10-day) hedge is more appropriate than a short-term (e.g. 5-day) hedge.  相似文献
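A minimal sketch of the two-state classification and Markov-chain estimation step, run on toy returns rather than the Taiwan index data: rolling volatility is thresholded into high/low states, the 2x2 transition matrix is estimated from observed state transitions, and the stationary probability of the high-volatility state is computed. The rolling window and the 70th-percentile threshold are assumptions.

```python
# Two-state volatility classification and Markov transition-matrix estimation.
import numpy as np

rng = np.random.default_rng(2)
returns = 0.01 * rng.standard_normal(2000)               # toy daily returns
window = 20
roll_vol = np.array([returns[i - window:i].std() * np.sqrt(252)
                     for i in range(window, len(returns))])

threshold = np.quantile(roll_vol, 0.7)                   # assumed cutoff between the two regimes
state = (roll_vol > threshold).astype(int)               # 1 = high volatility, 0 = low volatility

# Count one-step transitions to estimate the 2x2 Markov transition matrix.
counts = np.zeros((2, 2))
for s, s_next in zip(state[:-1], state[1:]):
    counts[s, s_next] += 1
P = counts / counts.sum(axis=1, keepdims=True)

# Stationary distribution: the eigenvector of P^T associated with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

print("transition matrix:\n", P.round(3))
print("stationary P(high-volatility state):", round(float(pi[1]), 4))
```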

11.
First, the relationship between factor analysis (FA) and the well-known arbitrage pricing theory (APT) for financial market is discussed comparatively, with a number of to-be-improved problems listed. An overview is made from a unified perspective on the related studies in the literatures of statistics, control theory, signal processing, and neural networks. Next, we introduce the fundamentals of the Bayesian Ying Yang (BYY) system and the harmony learning principle. We further show that a specific case of the framework, called BYY independent state space (ISS) system, provides a general guide for systematically tackling various FA related learning tasks and the above to-be-improved problems for the APT analyses. Third, on various specific cases of the BYY ISS system in three typical architectures, adaptive algorithms, regularization methods and model selection criteria are provided for either or both of parameter learning with automated model selection and parameter learning followed by model selection. Finally, we introduce some other financial applications that are based on the underlying independent factors via the APT analyses.  相似文献   

12.
In this article, a neural regression (NR) model, which produces nonlinear coefficients for a multiple regression model based on neural networks, is introduced to effectively capture the nonlinear characteristics of option valuation. Traditional linear regression uses the least-squares estimator to estimate the coefficients of a linear regression and thus may only produce suboptimal solutions. Applying neural networks to forecast volatility in option pricing has grown in popularity in recent years, since many studies have indicated that conventional option pricing models are not accurate enough. Our proposed neural regression model is devoted to evaluating option values so as to improve the tracking error in the measurement of hedging capability. The NR model uses the variables introduced by the Black–Scholes model and applies the multiple regression (MR) model to re-price option values. It is worth noting that each corresponding weight coefficient in the MR model is constructed by a complete neural network rather than by a scalar value. By capturing the nonlinear behavior of option pricing, our proposed NR model has lower tracking error and better hedging capability than the BS model and those of other studies.  相似文献
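For reference, the sketch below is just the textbook Black–Scholes call price built from the same inputs (S, K, T, r, sigma) that the NR model takes as regressors; it is not the neural regression model itself, and the input values are arbitrary.

```python
# Textbook Black-Scholes call price from the standard inputs S, K, T, r, sigma.
from math import exp, log, sqrt
from statistics import NormalDist

def black_scholes_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Example: about 5.4 for these illustrative inputs.
print(black_scholes_call(S=100, K=105, T=0.5, r=0.02, sigma=0.25))
```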

13.
To address the limitation that current inverse models cannot compute the convective heat transfer at indoor building boundaries, a temperature contribution ratio method is adopted: the causal relationship between the boundary convective heat transfer and the temperatures at indoor measurement points is expressed as a temperature contribution factor matrix. Based on computational fluid dynamics, least squares is combined with the Tikhonov regularization method to build an inverse mathematical model that solves for the boundary convective heat transfer from the discrete temperatures at several indoor measurement points. The model is validated experimentally on a three-dimensional ventilated cavity and an office room in a building; the root-mean-square deviations between the model solutions and the measured values are both below 80%, showing that the inverse model can accurately solve for the indoor boundary convective heat transfer.  相似文献
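A minimal sketch of the least-squares plus Tikhonov regularization step, with a synthetic, ill-conditioned contribution-factor matrix A in place of one computed by CFD: boundary convective heat fluxes q are recovered from noisy sensor temperature rises T, and the regularized solution is compared with plain least squares. The matrix, noise level, and regularization parameter alpha are assumptions (alpha is often chosen by the L-curve or GCV in practice).

```python
# Tikhonov-regularized inverse solve: recover boundary heat fluxes from sensor temperatures.
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_boundaries = 12, 6
U, _ = np.linalg.qr(rng.standard_normal((n_sensors, n_sensors)))
V, _ = np.linalg.qr(rng.standard_normal((n_boundaries, n_boundaries)))
s = np.logspace(0, -4, n_boundaries)                      # rapidly decaying singular values: ill-posed
A = U[:, :n_boundaries] @ np.diag(s) @ V.T                # synthetic contribution-factor matrix

q_true = rng.uniform(10, 100, n_boundaries)               # "true" boundary heat fluxes
T = A @ q_true + 0.01 * rng.standard_normal(n_sensors)    # noisy sensor temperature rises

alpha = 1e-3
q_tik = np.linalg.solve(A.T @ A + alpha * np.eye(n_boundaries), A.T @ T)
q_ls = np.linalg.lstsq(A, T, rcond=None)[0]               # plain least squares, for comparison

print("relative error, Tikhonov:", np.linalg.norm(q_tik - q_true) / np.linalg.norm(q_true))
print("relative error, plain LS:", np.linalg.norm(q_ls - q_true) / np.linalg.norm(q_true))
```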

14.
Forecasting the volatility of stock price index   (total citations: 1, self-citations: 0, citations by others: 1)
Accurate volatility forecasting is a core task in risk management, in which pricing, hedging, and option strategies for various portfolios are exercised. Prior studies on the stock market have primarily focused on estimation of the stock price index using financial time series models and data mining techniques. This paper proposes hybrid models combining neural networks and time series models for forecasting the volatility of a stock price index from two viewpoints: deviation and direction. The results demonstrate the utility of the hybrid model, which combines neural network forecasting with time series analysis, for volatility forecasting of financial instruments.  相似文献

15.
Bagging, boosting, rotation forest, and random subspace methods are well known re-sampling ensemble methods that generate and combine a diversity of learners using the same learning algorithm for the base classifiers. Boosting and rotation forest algorithms are considered stronger than bagging and random subspace methods on noise-free data. However, there are strong empirical indications that bagging and random subspace methods are much more robust than boosting and rotation forest in noisy settings. For this reason, in this work we build an ensemble consisting of bagging, boosting, rotation forest, and random subspace ensembles, each with 6 sub-classifiers, and use a voting methodology for the final prediction. We performed a comparison with simple bagging, boosting, rotation forest, and random subspace ensembles with 25 sub-classifiers, as well as with other well known combining methods, on standard benchmark datasets, and the proposed technique had better accuracy in most cases.  相似文献
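A minimal scikit-learn sketch of the "ensemble of ensembles with voting" idea: bagging, AdaBoost, and a random-subspace-style ensemble (bagging over feature subsets), each with 6 base classifiers, combined by hard voting. Rotation forest is omitted because scikit-learn has no built-in implementation; the dataset and settings are illustrative, not those of the paper.

```python
# Combine several small ensembles with majority voting.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=6, random_state=0)
boosting = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), n_estimators=6, random_state=0)
# Random-subspace style: bag over feature subsets instead of bootstrap samples.
subspace = BaggingClassifier(DecisionTreeClassifier(), n_estimators=6,
                             max_features=0.5, bootstrap=False, random_state=0)

combo = VotingClassifier(estimators=[("bag", bagging), ("boost", boosting), ("rsm", subspace)],
                         voting="hard")

for name, clf in [("bagging", bagging), ("boosting", boosting),
                  ("random subspace", subspace), ("voting combo", combo)]:
    print(name, cross_val_score(clf, X, y, cv=5).mean().round(3))
```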

16.
This paper is concerned with the valuation of European continuous-installment options where the aim is to determine the initial premium given a constant installment payment plan. The distinctive feature of this pricing problem is the determination, along with the initial premium, of an optimal stopping boundary since the option holder has the right to stop making installment payments at any time before maturity. Given that the initial premium function of this option is governed by an inhomogeneous Black-Scholes partial differential equation, we can obtain two alternative characterizations of the European continuous-installment option pricing problem, for which no closed-form solution is available. First, we formulate the pricing problem as a free boundary problem and using the integral representation method, we derive integral expressions for both the initial premium and the optimal stopping boundary. Next, we use the linear complementarity formulation of the pricing problem for determining the initial premium and the early stopping curve implicitly with a finite difference scheme. Finally, the pricing problem is posed as an optimal stopping problem and then implemented by a Monte Carlo approach.  相似文献   

17.
Optimal state estimation from given observations of a dynamical system by data assimilation is generally an ill-posed inverse problem. In order to solve the problem, a standard Tikhonov, or L2, regularization is used, based on certain statistical assumptions on the errors in the data. The regularization term constrains the estimate of the state to remain close to a prior estimate. In the presence of model error, this approach does not capture the initial state of the system accurately, as the initial state estimate is derived by minimizing the average error between the model predictions and the observations over a time window. Here we examine an alternative L1 regularization technique that has proved valuable in image processing. We show that for examples of flow with sharp fronts and shocks, the L1 regularization technique performs more accurately than standard L2 regularization.  相似文献   
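A generic sketch of the L2-versus-L1 comparison, not the paper's variational data-assimilation setup: a signal with a few sharp, localized features is recovered from noisy indirect measurements with a Tikhonov (L2) penalty and with an L1 penalty solved by iterative soft-thresholding (ISTA). The operator, noise level, and penalty weights are assumptions.

```python
# L2 (Tikhonov) versus L1 regularization on a signal with sharp features.
import numpy as np

rng = np.random.default_rng(4)
n, m = 100, 60
x_true = np.zeros(n)
x_true[[15, 40, 41, 80]] = [3.0, -2.0, -2.0, 4.0]        # sharp, localized features
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true + 0.05 * rng.standard_normal(m)

# L2 (Tikhonov) solution: closed form.
lam2 = 0.1
x_l2 = np.linalg.solve(A.T @ A + lam2 * np.eye(n), A.T @ b)

# L1 solution: iterative soft-thresholding.
lam1 = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2
x_l1 = np.zeros(n)
for _ in range(500):
    z = x_l1 - step * A.T @ (A @ x_l1 - b)                          # gradient step on the data misfit
    x_l1 = np.sign(z) * np.maximum(np.abs(z) - step * lam1, 0.0)    # soft threshold

print("reconstruction error, L2:", np.linalg.norm(x_l2 - x_true).round(3))
print("reconstruction error, L1:", np.linalg.norm(x_l1 - x_true).round(3))
```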

18.
On global-local artificial neural networks for function approximation   (total citations: 1, self-citations: 0, citations by others: 0)
We present a hybrid radial basis function (RBF) sigmoid neural network with a three-step training algorithm that utilizes both global search and gradient descent training. The algorithm used is intended to identify global features of an input-output relationship before adding local detail to the approximating function. It aims to achieve efficient function approximation through the separate identification of aspects of a relationship that are expressed universally from those that vary only within particular regions of the input space. We test the effectiveness of our method using five regression tasks; four use synthetic datasets while the last problem uses real-world data on the wave overtopping of seawalls. It is shown that the hybrid architecture is often superior to architectures containing neurons of a single type in several ways: lower mean square errors are often achievable using fewer hidden neurons and with less need for regularization. Our global-local artificial neural network (GL-ANN) is also seen to compare favorably with both perceptron radial basis net and regression tree derived RBFs. A number of issues concerning the training of GL-ANNs are discussed: the use of regularization, the inclusion of a gradient descent optimization step, the choice of RBF spreads, model selection, and the development of appropriate stopping criteria.  相似文献   

19.
We show that a hierarchical Bayesian modeling approach allows us to perform regularization in sequential learning. We identify three inference levels within this hierarchy: model selection, parameter estimation, and noise estimation. In environments where data arrive sequentially, techniques such as cross validation to achieve regularization or model selection are not possible. The Bayesian approach, with extended Kalman filtering at the parameter estimation level, allows for regularization within a minimum variance framework. A multilayer perceptron is used to generate the nonlinear measurement mapping for the extended Kalman filter. We describe several algorithms at the noise estimation level that allow us to implement on-line regularization. We also show the theoretical links between adaptive noise estimation in extended Kalman filtering, multiple adaptive learning rates, and multiple smoothing regularization coefficients.  相似文献
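A toy sketch of the parameter-estimation level: an extended Kalman filter estimates the weights of a small nonlinear model from sequentially arriving data, with the assumed measurement-noise variance R and process-noise covariance Q playing the role of regularization and learning-rate knobs. The model, data, and noise values are assumptions; the paper uses a multilayer perceptron as the measurement mapping.

```python
# Sequential weight estimation with an extended Kalman filter on a one-hidden-unit model.
import numpy as np

def model(w, x):
    # y = w0 * tanh(w1 * x + w2) + w3, a one-hidden-unit "network"
    return w[0] * np.tanh(w[1] * x + w[2]) + w[3]

def jacobian(w, x, eps=1e-6):
    # Numerical Jacobian of the measurement mapping w -> model(w, x).
    J = np.zeros_like(w)
    for i in range(len(w)):
        dw = np.zeros_like(w)
        dw[i] = eps
        J[i] = (model(w + dw, x) - model(w - dw, x)) / (2 * eps)
    return J

rng = np.random.default_rng(5)
w_true = np.array([2.0, 1.5, -0.5, 0.3])
w = 0.5 * rng.standard_normal(4)          # initial weight estimate (small random values)
P = np.eye(4)                             # prior covariance over the weights
R, Q = 0.05, 1e-5 * np.eye(4)             # measurement / process noise: the regularization knobs

for _ in range(3000):                     # data arrive one sample at a time
    x = rng.uniform(-3, 3)
    y = model(w_true, x) + np.sqrt(R) * rng.standard_normal()
    H = jacobian(w, x)                    # linearize the measurement around the current estimate
    P = P + Q
    S = H @ P @ H + R                     # innovation variance (scalar here)
    K = P @ H / S                         # Kalman gain
    w = w + K * (y - model(w, x))
    P = P - np.outer(K, H @ P)

x_test = np.linspace(-3, 3, 200)
rmse = np.sqrt(np.mean((model(w, x_test) - model(w_true, x_test)) ** 2))
print("estimated weights:", w.round(3), " test RMSE of the fitted mapping:", round(float(rmse), 4))
```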

20.
To address the problem that the cyclic redundancy check (CRC) criterion may lead to a large number of decoding iterations and decoding errors when channel conditions deteriorate, a reliability-based iteration stopping algorithm and a retransmission algorithm are proposed. First, after each iteration, the reliability of the current intermediate decoding result is computed, and the iterations are terminated early once it reaches a threshold. Then, the intermediate result with the largest reliability is stored and used as the final decoding result. Finally, after each decoding, whether to retransmit is decided by checking whether the largest reliability falls below a retransmission threshold, and the best decoding result is computed from the decoding results of at most 3 transmissions. Simulation results show that, at signal-to-noise ratios below 1.2 dB, the iteration stopping algorithm reduces bit errors by 1 or 2 compared with the CRC criterion without increasing the number of iterations, the retransmission algorithm further reduces at least 2 more bit errors, and the reliability-based algorithms achieve fewer bit errors and fewer iterations.  相似文献
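A schematic sketch of the reliability-based stopping and retransmission logic: the decode_iteration function is a stand-in that merely simulates log-likelihood ratios (LLRs) improving over iterations rather than performing real turbo/LDPC decoding, and the reliability measure and thresholds are assumptions.

```python
# Reliability-based early stopping and retransmission around a placeholder iterative decoder.
import numpy as np

rng = np.random.default_rng(6)

def decode_iteration(llr, iteration):
    # Placeholder for one iteration of an iterative decoder: nudges LLRs away from zero.
    return llr + 0.3 * np.sign(llr) * rng.random(llr.size) * (1 + 0.1 * iteration)

def reliability(llr):
    # A simple reliability measure: the mean absolute LLR of the intermediate result.
    return float(np.mean(np.abs(llr)))

def decode_with_reliability(llr0, max_iters=20, stop_thresh=4.0):
    llr, best_llr, best_rel = llr0.copy(), llr0.copy(), reliability(llr0)
    it = 0
    for it in range(1, max_iters + 1):
        llr = decode_iteration(llr, it)
        rel = reliability(llr)
        if rel > best_rel:                        # keep the most reliable intermediate result
            best_llr, best_rel = llr.copy(), rel
        if rel >= stop_thresh:                    # stop early once reliability reaches the threshold
            break
    return best_llr, best_rel, it

retransmit_thresh, max_transmissions = 3.0, 3
results = []
for tx in range(1, max_transmissions + 1):
    llr0 = rng.normal(0.5, 1.0, size=64) * rng.choice([-1, 1], size=64)   # toy channel LLRs
    best_llr, best_rel, iters = decode_with_reliability(llr0)
    results.append((best_rel, best_llr))
    print(f"transmission {tx}: best reliability {best_rel:.2f} after {iters} iterations")
    if best_rel >= retransmit_thresh:             # reliable enough, so no retransmission is requested
        break

best_rel, best_llr = max(results, key=lambda r: r[0])    # best result across at most 3 transmissions
bits = (best_llr > 0).astype(int)
print("hard-decision bits (first 16):", bits[:16])
```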

