Similar Articles
20 similar articles found (search time: 15 ms)
1.
This paper builds on previous research and seeks to determine whether improvements can be achieved in the forecasting of oil price volatility by using a hybrid model and incorporating financial variables. The main conclusion is that the hybrid model increases volatility forecasting precision by 30% over previous models, as measured by the heteroscedasticity-adjusted mean squared error (HMSE). Key financial variables included in the model that improved the prediction are the Euro/Dollar and Yen/Dollar exchange rates, and the DJIA and FTSE stock market indexes.
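As a rough illustration of the evaluation criterion, the sketch below computes a heteroscedasticity-adjusted MSE in one commonly used form (the averaged squared relative error of the variance forecast); the exact normalization used in the paper is not given here, so the function should be read as an assumption.

```python
import numpy as np

def hmse(realized_var, forecast_var):
    """Heteroscedasticity-adjusted MSE in one common form: the squared
    relative forecast error averaged over the sample. (The exact
    normalization used in the paper is not reproduced here.)"""
    realized_var = np.asarray(realized_var, dtype=float)
    forecast_var = np.asarray(forecast_var, dtype=float)
    return np.mean((1.0 - forecast_var / realized_var) ** 2)

# Example: compare two volatility forecasts on toy data
rv = np.array([1.2, 0.8, 1.5, 0.9])   # realized variance proxy
f1 = np.array([1.0, 1.0, 1.0, 1.0])   # naive constant forecast
f2 = np.array([1.1, 0.9, 1.4, 1.0])   # hybrid-style forecast
print(hmse(rv, f1), hmse(rv, f2))
```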

2.
Liu Jinfeng, Guo Lei. Microcomputer & Its Applications, 2011, 30(18): 69-71, 75
A forward-propagation algorithm for neural networks is implemented on the GPU using the CUDA architecture. The algorithm exploits the parallelism of the neuron computations within each layer of the network: one kernel function computes the values of all neurons in a layer in parallel, and each kernel is optimized according to the characteristics of the neural network and of the CUDA architecture. Experiments show that the algorithm runs about 7 times faster than an ordinary CPU implementation. The results are a useful reference both for accelerating neural network computation and for judging where CUDA is applicable.
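A minimal NumPy sketch of the per-layer forward pass that the paper parallelizes: each layer's neurons are independent given the previous layer's outputs, which is what allows one CUDA kernel per layer with one thread per neuron. The network shape and activation below are illustrative, not the paper's.

```python
import numpy as np

def forward(x, weights, biases, act=np.tanh):
    """Layer-by-layer forward pass. In the paper each layer is computed by
    one CUDA kernel, with that layer's neurons evaluated in parallel;
    here the same per-layer step is written as a dense matrix product."""
    a = x
    for W, b in zip(weights, biases):
        # Each output neuron a_j = act(sum_i W[j, i] * a_i + b_j);
        # on the GPU, one thread would handle one neuron j.
        a = act(W @ a + b)
    return a

# Toy 3-4-2 network with random parameters
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
biases = [rng.standard_normal(4), rng.standard_normal(2)]
print(forward(rng.standard_normal(3), weights, biases))
```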

3.
The Black–Scholes (BS) model is the standard approach used for pricing financial options. However, although theoretically sound, option prices valued by the model often differ from the prices observed in the financial markets. This paper applies a hybrid neural network which preprocesses financial input data to improve the estimation of option market prices. The model comprises two parts. The first part is a neural network developed to estimate volatility. The second part is an additional neural network developed to value the difference between the BS model results and the actual market option prices. The resulting option price is then the sum of the BS model price and the network response. The hybrid system with a neural network for estimating volatility provides better pricing accuracy than either the BS model with historical volatility (HV) or the BS model with volatility valued by the neural network.
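A hedged sketch of the hybrid pricing idea described above: the Black-Scholes formula is evaluated with a volatility supplied by one network, and a second network's output is added as a correction toward the market price. The `vol_net` and `correction_net` callables are placeholders for the two trained networks, not the paper's models.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Standard Black-Scholes European call price."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def hybrid_price(S, K, T, r, vol_net, correction_net):
    """Hybrid price = BS(sigma from first network) + residual from second
    network. Both callables stand in for the two trained networks."""
    sigma = vol_net(S, K, T, r)             # network 1: volatility estimate
    residual = correction_net(S, K, T, r)   # network 2: market-price correction
    return bs_call(S, K, T, r, sigma) + residual

# Toy stand-ins for the two networks
price = hybrid_price(100.0, 95.0, 0.5, 0.02,
                     vol_net=lambda *args: 0.25,
                     correction_net=lambda *args: 0.3)
print(price)
```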

4.
Data Envelopment Analysis (DEA) is one of the most widely used methods for measuring the efficiency and productivity of Decision Making Units (DMUs). DEA for a large dataset with many inputs/outputs requires huge computer resources in terms of memory and CPU time. This paper proposes a neural network back-propagation Data Envelopment Analysis to address this problem for the very large datasets now emerging in practice. The neural network's requirements for computer memory and CPU time are far lower than those of conventional DEA methods, so it can be a useful tool for measuring the efficiency of large datasets. Finally, the back-propagation DEA algorithm is applied to five large datasets and compared with the results obtained by conventional DEA.

5.
Neural nets' usefulness for forecasting is limited by problems of overfitting and the lack of rigorous procedures for model identification, selection and adequacy testing. This paper describes a methodology for neural model misspecification testing. We introduce a generalization of the Durbin-Watson statistic for neural regression and discuss the general issues of misspecification testing using residual analysis. We derive a generalized influence matrix for neural estimators which enables us to evaluate the distribution of the statistic. We deploy Monte Carlo simulation to compare the power of the test for neural and linear regressors. While residual testing is not a sufficient condition for model adequacy, it is nevertheless a necessary condition to demonstrate that the model is a good approximation to the data generating process, particularly as neural-network estimation procedures are susceptible to partial convergence. The work is also an important step toward developing rigorous procedures for neural model identification, selection and adequacy testing, which have started to appear in the literature. We demonstrate its applicability in the nontrivial problem of forecasting implied volatility innovations using high-frequency stock index options. Each step of the model building process is validated using statistical tests to verify variable significance and model adequacy, with the results confirming the presence of nonlinear relationships in implied volatility innovations.
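For orientation, the snippet below computes the classical Durbin-Watson statistic on a residual series; the paper's contribution is the generalization of this statistic (and of its null distribution, via an influence matrix) to neural regressors, which the sketch does not reproduce.

```python
import numpy as np

def durbin_watson(residuals):
    """Classical Durbin-Watson statistic on a residual series:
    DW = sum_t (e_t - e_{t-1})^2 / sum_t e_t^2; values near 2 suggest
    no first-order autocorrelation."""
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(1)
white = rng.standard_normal(500)                   # uncorrelated residuals
correlated = np.convolve(white, [1.0, 0.8])[:500]  # positively autocorrelated residuals
print(durbin_watson(white), durbin_watson(correlated))
```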

6.
Financial volatility trading using recurrent neural networks
We simulate daily trading of straddles on financial indexes. The straddles are traded based on predictions of daily volatility differences in the indexes. The main predictive models studied are recurrent neural nets (RNN). Such applications have often been studied in isolation. However, due to the special character of daily financial time-series, it is difficult to make full use of RNN representational power. Recurrent networks either tend to overestimate noisy data, or behave like finite-memory sources with shallow memory; they hardly beat classical fixed-order Markov models. To overcome data nonstationarity, we use a special technique that combines sophisticated models fitted on a larger data set, with a fixed set of simple-minded symbolic predictors using only recent inputs. Finally, we compare our predictors with the GARCH family of econometric models designed to capture time-dependent volatility structure in financial returns. GARCH models have been used to trade volatility. Experimental results show that while GARCH models cannot generate any significantly positive profit, by careful use of recurrent networks or Markov models, the market makers can generate a statistically significant excess profit, but then there is no reason to prefer RNN over much more simple and straightforward Markov models. We argue that any report containing RNN results on financial tasks should be accompanied by results achieved by simple finite-memory sources combined with simple techniques to fight nonstationarity in the data.

7.
To improve the efficiency of covariance matrix computation in image processing and meet real-time requirements, this work uses general-purpose GPU computing with the CUDA programming model to parallelize and optimize the covariance matrix computation, designing and implementing an efficient parallel algorithm for image covariance matrices. It provides a practical solution for image processing applications that need real-time covariance matrices on commodity PC platforms, and serves as a useful reference for real-time covariance matrix computation in other fields. Compared with the original CPU implementation, the GPU version achieves an average speedup of several thousand times.
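A plain NumPy sketch of the covariance computation being accelerated, with comments marking the steps that are data parallel and therefore map naturally onto GPU threads; the data layout is illustrative.

```python
import numpy as np

def covariance_matrix(X):
    """Covariance of feature vectors, X with shape (n_samples, n_features).
    Both steps are data parallel and map naturally onto GPU threads:
    the per-feature mean reduction and the (d x d) matrix of dot products."""
    Xc = X - X.mean(axis=0, keepdims=True)   # center each feature (parallel over columns)
    return Xc.T @ Xc / (X.shape[0] - 1)      # each output entry is an independent dot product

# Toy example: 5 pixels described by 3 features each
X = np.arange(15, dtype=float).reshape(5, 3)
print(covariance_matrix(X))
print(np.allclose(covariance_matrix(X), np.cov(X, rowvar=False)))
```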

8.
This paper is concerned with the overall design of the Terabit File Store — a network storage facility based on a Braegen Automated Tape Library. The characteristics of the tape library — in particular its large capacity and slow access — provide both challenges and opportunities for the system designer. The use of disc cache and the optimization of file placement have been used to provide reasonable performance in the face of substantial tape handling times. Catalogue facilities have been tailored to cater for the support of large file holdings and user file back-up applications. It has been possible to automate the back-up of essential file-store information which, together with automatic integrity checks, helps to minimize the damage that can be caused by faults. Considerable attention has been given to facilities that automate much of the management of the system in a network environment. Other aspects of the design discussed in the paper include protection, housekeeping and host interfacing.

9.
Stock prices as time series are non-stationary and highly noisy because stock markets are affected by a variety of factors. Predicting a stock price or index directly from the noisy data is usually subject to large errors. In this paper, we propose a new approach to forecasting stock prices via the Wavelet De-noising-based Back Propagation (WDBP) neural network, and develop an effective algorithm for predicting stock prices. The monthly closing price data of the Shanghai Composite Index from January 1993 to December 2009 are used to illustrate the application of the WDBP neural network based algorithm in predicting the stock index. To show the advantage of this new approach for stock index forecasting, the WDBP neural network is compared with a single Back Propagation (BP) neural network on the real data set.
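A minimal sketch of a WDBP-style pipeline using PyWavelets and scikit-learn as stand-ins for the paper's own implementation: wavelet soft-threshold denoising of the price series, followed by a small back-propagation network fitted on lagged windows. The wavelet, decomposition level, threshold and window length are illustrative assumptions.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def wavelet_denoise(x, wavelet="db4", level=3):
    """Soft-threshold wavelet denoising (a common WDBP-style preprocessing
    step; wavelet, level and universal threshold are illustrative choices)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise scale from finest details
    thr = sigma * np.sqrt(2 * np.log(len(x)))          # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

# Denoise a toy "closing price" series, then fit a small BP network
# on lagged windows to predict the next value.
rng = np.random.default_rng(2)
prices = np.cumsum(rng.standard_normal(300)) + 100.0
clean = wavelet_denoise(prices)
lag = 12
X = np.array([clean[i : i + lag] for i in range(len(clean) - lag)])
y = clean[lag:]
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print(model.predict(X[-1:]))
```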

10.
Multimedia Tools and Applications - Visual speech recognition is a method that comprehends speech from a speaker's lip movements, and the speech is validated only by the shape and lip movement....

11.
This study applies a backpropagation neural network to forecast TXO prices under different volatility models, including historical volatility, implied volatility, deterministic volatility function, GARCH and GM-GARCH models. The sample period runs from 2008 to 2009, and thus contains the global financial crisis starting in October 2008. Besides RMSE, MAE and MAPE, this study introduces the best forecasting performance ratio (BFPR) as a new performance measure for use in option pricing. The analytical result reveals that forecasting performances are related to the moneyness, volatility model and number of neurons in the hidden layer, but are not significantly related to the activation function. The implied and deterministic volatility function models have the largest and second largest BFPR regardless of moneyness. In particular, the forecasting performance in 2008 was significantly inferior to that in 2009, demonstrating that the global financial crisis starting in October 2008 may have strongly influenced option pricing performance.

12.
This paper presents an effective scheme for clustering a huge data set on a PC cluster system in which each PC is equipped with a commodity programmable graphics processing unit (GPU). The proposed scheme achieves three-level hierarchical parallel processing of massive data clustering. A divide-and-conquer approach to parallel data clustering performs the coarse-grain parallel processing across multiple PCs with a message-passing mechanism. By taking advantage of the GPU's parallel processing capability, the scheme further exploits two types of fine-grain data parallelism at different levels in the nearest neighbor search, which is the most computationally intensive part of the data-clustering process. The performance of our scheme is discussed in comparison with an implementation running entirely on the CPU. Experimental results clearly show that the proposed hierarchical parallel processing can remarkably accelerate the data clustering task. In particular, GPU co-processing is quite effective at improving the computational efficiency of parallel data clustering on a PC cluster: although data transfer from GPU to CPU is generally costly, GPU co-processing still yields significant savings in the total execution time of data clustering.
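As a sketch of the compute-intensive kernel, the assignment (nearest-centroid) step of a k-means-style clustering is shown below in NumPy. Each point's distance computation is independent, which is the kind of fine-grain parallelism offloaded to each GPU, while partitioning `points` across PCs gives the coarse-grain divide-and-conquer level. The clustering variant and array shapes are assumptions.

```python
import numpy as np

def assign_clusters(points, centroids):
    """Nearest-neighbor (assignment) step of k-means-style clustering,
    the most computation-intensive part of the data-clustering process.
    Every point-to-centroid distance is independent, so each row of the
    distance matrix could be computed by its own GPU thread."""
    # (n_points, k) matrix of squared distances, one row per point
    d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

rng = np.random.default_rng(3)
points = rng.standard_normal((1000, 8))
centroids = rng.standard_normal((16, 8))
print(np.bincount(assign_clusters(points, centroids), minlength=16))
```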

13.
A toroidal lattice architecture (TLA) and a planar lattice architecture (PLA) are proposed as massively parallel neurocomputer architectures for large-scale simulations. The performance of these architectures is almost proportional to the number of node processors, and they adopt the most efficient two-dimensional processor connections for WSI implementation. They also give a solution to the connectivity problem, the performance degradation caused by the data transmission bottleneck, and the load balancing problem for efficient parallel processing in large-scale neural network simulations. The general neuron model is defined. Implementation of the TLA with transputers is described. A Hopfield neural network and a multilayer perceptron have been implemented and applied to the traveling salesman problem and to identity mapping, respectively. Proof that the performance increases almost in proportion to the number of node processors is given.

14.
Small-time scale network traffic prediction based on flexible neural tree
In this paper, the flexible neural tree (FNT) model is employed to predict small-time scale traffic measurement data. Based on pre-defined instruction/operator sets, the FNT model can be created and evolved. This framework allows input variable selection, over-layer connections and different activation functions for the various nodes involved. The FNT structure is developed using Genetic Programming (GP) and the parameters are optimized by the Particle Swarm Optimization (PSO) algorithm. The experimental results indicate that the proposed method is efficient for forecasting small-time scale traffic measurements and can reproduce the statistical features of real traffic measurements. We also compare the performance of the FNT model with a feed-forward neural network optimized by PSO on the same problem.

15.
A new method to construct nonparametric prediction intervals for nonlinear time series data is proposed. Within the framework of the recently developed sieve bootstrap, the new approach employs neural network models to approximate the original nonlinear process. The method is flexible and easy to implement as a standard residual bootstrap scheme while retaining the advantage of being a nonparametric technique. It is model-free within a general class of nonlinear processes and avoids the specification of a finite dimensional model for the data generating process. The results of a Monte Carlo study are reported in order to investigate the finite-sample performance of the proposed procedure.
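A simplified sketch of the residual-bootstrap scheme under discussion, with an ordinary least-squares one-step forecaster standing in for the neural network sieve; the lag length, number of replicates and helper names are illustrative assumptions.

```python
import numpy as np

def bootstrap_interval(x, fit, horizon=1, n_boot=500, alpha=0.1, lag=2, seed=0):
    """Residual (sieve-style) bootstrap prediction interval for a one-step
    predictor. `fit(X, y)` must return a callable one-step forecaster; in the
    paper this role is played by a neural network approximating the nonlinear
    process, but any regression fit will do for the sketch."""
    rng = np.random.default_rng(seed)
    X = np.array([x[i : i + lag] for i in range(len(x) - lag)])
    y = x[lag:]
    predict = fit(X, y)
    resid = y - predict(X)
    resid -= resid.mean()                       # center the residuals
    sims = []
    for _ in range(n_boot):
        path = list(x[-lag:])
        for _ in range(horizon):                # regenerate future values
            step = predict(np.array(path[-lag:])[None, :])[0]
            path.append(step + rng.choice(resid))
        sims.append(path[-1])
    return np.percentile(sims, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Toy usage with a linear least-squares forecaster standing in for the network
def ls_fit(X, y):
    coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return lambda Z: np.atleast_2d(Z) @ coef[:-1] + coef[-1]

x = np.sin(np.linspace(0, 20, 200)) + 0.1 * np.random.default_rng(4).standard_normal(200)
print(bootstrap_interval(x, ls_fit))
```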

16.
Li Wei, Lei Zhou, Yuan Junqing, Luo Haonan, Xu Qingzheng. Applied Intelligence, 2021, 51(7): 4984-5006
Applied Intelligence - Competitive swarm optimizer (CSO) has been shown to be an effective optimization algorithm for large scale optimization. However, the learning strategy of a loser particle...

17.
In this paper a general fuzzy hyperline segment neural network is proposed [P.M. Patil, Pattern classification and clustering using fuzzy neural networks, Ph.D. Thesis, SRTMU, Nanded, India, January 2003]. It combines supervised and unsupervised learning in a single algorithm so that it can be used for pure classification, pure clustering and hybrid classification/clustering. The method is applied to handwritten Devanagari numeral character recognition and also to the Fisher Iris database. High recognition rates are achieved with less training and recall time per pattern. The algorithm is rotation, scale and translation invariant. The recognition rate with ring data features is found to be 99.5%.

18.
Recently, cellular neural networks (CNNs) have been demonstrated to be a highly effective paradigm applicable in a wide range of areas. Typically, CNNs can be implemented using VLSI circuits, but this would unavoidably require additional hardware. On the other hand, we can also implement CNNs purely by software; this, however, would result in very low performance when given a large CNN problem size. Nowadays, conventional desktop computers are usually equipped with programmable graphics processing units (GPUs) that can support parallel data processing. This paper introduces a GPU-based CNN simulator. In detail, we carefully organize the CNN data as 4-channel textures, and efficiently implement the CNN computation as fragment programs running in parallel on a GPU. In this way, we can create a high performance but low-cost CNN simulator. Experimentally, we demonstrate that the resultant GPU-based CNN simulator can run 8–17 times faster than a CPU-based CNN simulator.
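For reference, one forward-Euler step of the standard cellular neural network dynamics is sketched below on the CPU with SciPy; the paper evaluates the same per-cell update in parallel as fragment programs over 4-channel textures. The 3x3 templates shown are a common edge-detection example, not taken from the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def cnn_step(x, u, A, B, z, dt=0.1):
    """One forward-Euler step of the standard CNN dynamics:
    dx/dt = -x + A*y + B*u + z, with y = 0.5(|x+1| - |x-1|) and * a local
    3x3 template convolution. Every cell's update depends only on its
    neighborhood, which is what makes the computation GPU-friendly."""
    y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))    # piecewise-linear output
    dx = -x + convolve2d(y, A, mode="same") + convolve2d(u, B, mode="same") + z
    return x + dt * dx

# Illustrative edge-detection-style templates on a toy binary image
A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], dtype=float)
B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=float)
u = np.zeros((16, 16)); u[4:12, 4:12] = 1.0
x = u.copy()
for _ in range(50):
    x = cnn_step(x, u, A, B, z=-1.0)
print(np.round(0.5 * (np.abs(x + 1) - np.abs(x - 1))))
```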

19.
Recently, General Purpose Graphical Processing Units (GP-GPUs) have been identified as an intriguing technology to accelerate numerous data-parallel algorithms. Several GPU architectures and programming models are beginning to emerge and establish their niche in the High-Performance Computing (HPC) community. New massively parallel architectures such as the Nvidia's Fermi and AMD/ATi's Radeon pack tremendous computing power in their large number of multiprocessors. Their performance is unleashed using one of the two GP-GPU programming models: Compute Unified Device Architecture (CUDA) and Open Computing Language (OpenCL). Both of them offer constructs and features that have direct bearing on the application runtime performance. In this paper, we compare the two GP-GPU architectures and the two programming models using a two-level character recognition network. The two-level network is developed using four different Spiking Neural Network (SNN) models, each with different ratios of computation-to-communication requirements. To compare the architectures, we have chosen the two extremes of the SNN models for implementation of the aforementioned two-level network. An architectural performance comparison of the SNN application running on Nvidia's Fermi and AMD/ATi's Radeon is done using the OpenCL programming model exhausting all of the optimization strategies plausible for the two architectures. To compare the programming models, we implement the two-level network on Nvidia's Tesla C2050 based on the Fermi architecture. We present a hierarchy of implementations, where we successively add optimization techniques associated with the two programming models. We then compare the two programming models at these different levels of implementation and also present the effect of the network size (problem size) on the performance. We report significant application speed-up, as high as 1095× for the most computation intensive SNN neuron model, against a serial implementation on the Intel Core 2 Quad host. A comprehensive study presented in this paper establishes connections between programming models, architectures and applications.

20.
One of the challenging problems in forecasting the conditional volatility of stock market returns is that the general kernel functions used in support vector machines (SVM) cannot accurately capture the clustering feature of volatility. Wavelet functions yield features that describe the volatility time series both at various locations and at varying time granularities, so this paper constructs a multidimensional wavelet kernel function and proves that it satisfies the Mercer condition to address this problem. The applicability and validity of the wavelet support vector machine (WSVM) for volatility forecasting are confirmed through computer simulations and experiments on real-world stock data.
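A hedged sketch of a multidimensional wavelet kernel in one common translation-invariant form (a product of Morlet-type mother wavelets over the input dimensions), plugged into an SVM regressor via a precomputed Gram matrix; the mother wavelet, dilation parameter and toy data are assumptions rather than the paper's exact construction.

```python
import numpy as np
from sklearn.svm import SVR

def wavelet_kernel(X, Y, a=1.0):
    """Multidimensional (translation-invariant) wavelet kernel
    K(x, y) = prod_i h((x_i - y_i)/a) with the Morlet-type mother wavelet
    h(u) = cos(1.75 u) * exp(-u^2 / 2). A common form used here for
    illustration; the paper proves its own kernel meets the Mercer condition."""
    D = (X[:, None, :] - Y[None, :, :]) / a
    return np.prod(np.cos(1.75 * D) * np.exp(-0.5 * D ** 2), axis=2)

# Toy volatility-style regression: lagged squared returns -> next squared return
rng = np.random.default_rng(5)
r2 = np.abs(np.sin(np.linspace(0, 12, 240))) + 0.05 * rng.random(240)
lag = 5
X = np.array([r2[i : i + lag] for i in range(len(r2) - lag)])
y = r2[lag:]
model = SVR(kernel="precomputed", C=10.0).fit(wavelet_kernel(X, X), y)
pred = model.predict(wavelet_kernel(X[-3:], X))   # Gram matrix of test vs. training samples
print(pred)
```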
