Similar Documents (20 results)
1.
There is substantial evidence that many financial time series exhibit leptokurtosis and volatility clustering. We compare the two most commonly used statistical distributions in empirical analysis to capture these features: the t distribution and the generalized error distribution (GED). A Bayesian approach using a reversible-jump Markov chain Monte Carlo method and a forecasting evaluation method are adopted for the comparison. In the Bayesian evaluation of eight daily market returns, we find that the fitted t error distribution outperforms the GED. In terms of volatility forecasting, models with t innovations also demonstrate superior out-of-sample performance.
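The comparison in this abstract centers on the innovation distribution of a volatility model. As a rough, non-authoritative illustration (the paper itself uses a Bayesian reversible-jump MCMC comparison, not maximum likelihood), the sketch below fits a GARCH(1,1) model under Student-t and GED innovations using the third-party `arch` package; the synthetic return series is a stand-in for the paper's market data.

```python
import numpy as np
import pandas as pd
from arch import arch_model

# Synthetic stand-in for a daily return series; in the paper's setting this
# would be one of the eight market return series.
rng = np.random.default_rng(0)
returns = pd.Series(rng.standard_t(df=5, size=2000))

def fit_garch(returns, dist):
    # dist: "t" for Student-t innovations, "ged" for the generalized error distribution
    model = arch_model(returns, vol="GARCH", p=1, q=1, dist=dist)
    return model.fit(disp="off")

res_t, res_ged = fit_garch(returns, "t"), fit_garch(returns, "ged")

# An information criterion is only a crude stand-in for the paper's
# Bayesian reversible-jump MCMC model comparison.
print("Student-t BIC:", res_t.bic, " GED BIC:", res_ged.bic)
```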

2.
In the paper we investigate experimentally the feasibility of rough sets in building profitable trend prediction models for financial time series. In order to improve the decision process for long time series, a novel time-weighted rule voting method, which accounts for information aging, is proposed. The experiments have been performed using market data of multiple stock market indices. The classification efficiency and financial performance of the proposed rough sets models were verified and compared with those of support vector machines models and reference financial indices. The results showed that the rough sets approach with time-weighted rule voting outperforms the classical rough sets and support vector machines decision systems and is profitable compared to the buy-and-hold strategy. In addition, with the use of variable precision rough sets, the effectiveness of generated trading signals was further improved.
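The key modification described here is that a matching rule's vote is discounted by the age of the information it was induced from. The sketch below is only a guess at one reasonable form of such weighting (exponential decay with a hypothetical `half_life` parameter); it does not reproduce the paper's exact scheme.

```python
import math
from collections import defaultdict

def time_weighted_vote(matching_rules, current_time, half_life=60.0):
    """Aggregate votes of matched decision rules, discounting old rules.

    matching_rules: iterable of (decision, rule_time, strength) tuples, where
    rule_time is when the supporting examples were observed. half_life is a
    hypothetical aging parameter (same time units as rule_time)."""
    scores = defaultdict(float)
    for decision, rule_time, strength in matching_rules:
        age = max(0.0, current_time - rule_time)
        weight = math.exp(-math.log(2.0) * age / half_life)
        scores[decision] += strength * weight
    # Return the decision class with the highest time-weighted support.
    return max(scores, key=scores.get) if scores else None

# Example: an old "up" rule is outvoted by two recent "down" rules.
rules = [("up", 10, 1.0), ("down", 95, 0.6), ("down", 99, 0.5)]
print(time_weighted_vote(rules, current_time=100))
```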

3.
This paper presents the theory, methodology and application of a new predictive model for time series within the financial sector, specifically data from 20 companies listed on the U.S. stock exchange market. The main impact of this article is (1) the proposal of a recommender system for financial investment to increase the cumulative gain; (2) an artificial predictor that beats the market in most cases; and (3) the fact that, to the best of our knowledge, this is the first effort to predict time series by learning redundant dictionaries to sparsely reconstruct these signals. The methodology is conducted by finding the optimal set of predicting model atoms through two directions for dictionary generation: the first by extracting atoms from past daily return price values in order to build untrained dictionaries, and the second by atom extraction followed by training of the dictionaries through K-SVD. Prediction of financial time series is a periodic process where each cycle consists of two stages: (1) training of the model to learn the dictionary that maximizes the probability of occurrence of an observation sequence of return values, and (2) prediction of the return value for the next trading day. The motivation for such research is the fact that a tool which might generate confidence in the potential benefits obtained from using formal financial services would encourage more participation in a formal system such as the stock market. Theory, issues, challenges and results related to the application of sparse representation to the prediction of financial time series, as well as the performance of the method, are presented.
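As a loose illustration of the dictionary-learning idea (not the paper's actual K-SVD pipeline), the sketch below learns an overcomplete dictionary from windows of past returns and sparsely codes the most recent window; scikit-learn's `DictionaryLearning` is a stand-in for K-SVD, and the window length, atom count and synthetic data are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Synthetic stand-in for a daily return series; the paper uses returns of
# 20 U.S.-listed companies.
rng = np.random.default_rng(0)
returns = rng.normal(scale=0.01, size=500)

window = 20                                   # hypothetical window length
X = np.array([returns[i:i + window] for i in range(len(returns) - window)])

# scikit-learn's DictionaryLearning stands in for K-SVD here.
dico = DictionaryLearning(n_components=40, transform_algorithm="omp",
                          transform_n_nonzero_coefs=5, random_state=0)
dico.fit(X)

# Sparse-code the most recent window and reconstruct it from a few atoms;
# a predictive variant would match atoms on the first window-1 values and
# read off the atom's final entry as the next-day forecast.
latest = returns[-window:].reshape(1, -1)
reconstruction = dico.transform(latest) @ dico.components_
print("reconstruction error:", float(np.abs(reconstruction - latest).mean()))
```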

4.
Market making (MM) strategies have played an important role in the electronic stock market. However, MM strategies without any forecasting power are not safe while trading. In this paper, we design and implement a two-tier framework, which includes a trading signal generator based on a supervised learning approach and an event-driven MM strategy. The proposed generator incorporates the information within order book microstructure and market news to provide directional predictions. The MM strategy in the second tier trades on the signals and protects itself from losses caused by market trending. Using half a year of price tick data from the Tokyo Stock Exchange (TSE) and the Shanghai Stock Exchange (SSE), and corresponding Thomson Reuters news from the same time period, we conduct back-testing and simulation on an industrial near-to-reality simulator. From the empirical results, we find that 1) strategies with signals perform better than strategies without any signal in terms of average daily profit and loss (PnL) and Sharpe ratio (SR), and 2) correct predictions help MM strategies readjust their quoting along with market trending, which keeps the strategies from triggering the stop-loss procedure that would otherwise realize the paper loss.
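One way to picture the second tier is that a directional signal skews the market maker's bid and ask quotes so its inventory is not run over by a trend. The sketch below is a purely hypothetical quoting rule (symmetric spread plus a signal-proportional skew); it is not the paper's strategy and the parameter values are invented.

```python
def quote(mid_price, signal, half_spread=0.05, skew=0.03):
    """Return (bid, ask) around mid_price, shifted by a directional signal.

    signal is assumed to lie in [-1, 1]: positive means an upward move is
    predicted, so both quotes are raised; negative lowers them. All
    parameters here are hypothetical, for illustration only."""
    shift = skew * signal
    bid = mid_price - half_spread + shift
    ask = mid_price + half_spread + shift
    return bid, ask

# With an upward signal both quotes shift higher, reducing the chance of
# selling cheaply into a rising market (and of hitting a stop-loss later).
print(quote(100.0, signal=0.8))
```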

5.
Currently FOREX (the foreign exchange market) is the largest financial market in the world. Forex market analysis is usually based on Forex time series prediction. Nevertheless, trading expert systems based on such predictions do not usually provide satisfactory results. On the other hand, stock trading expert systems, also called “mechanical trading systems”, which are based on technical analysis, are very popular and may provide good profits. Therefore, in this paper we propose a Forex trading expert system based on some new technical analysis indicators and a new approach to rule-base evidential reasoning (RBER), a synthesis of fuzzy logic and the Dempster–Shafer theory of evidence. We have found that traditional fuzzy logic rules lose important information when dealing with intersecting fuzzy classes, e.g., Low and Medium, and we have shown that this property may lead to controversial results in practice. In the framework of the new approach proposed in this paper, information about the values of all membership functions representing the intersecting (competing) fuzzy classes is preserved and used in the fuzzy logic rules. The advantages of the proposed approach are demonstrated using the developed expert system, optimized and tested on real data from the Forex market for four currency pairs and the time frames 15 m, 30 m, 1 h and 4 h.
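The point about intersecting fuzzy classes can be illustrated with a toy example: when an input falls where the Low and Medium membership functions overlap, a winner-takes-all rule system discards the competing membership value, whereas the approach described above preserves both. The triangular membership functions and thresholds below are hypothetical and are not taken from the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def memberships(x):
    # Hypothetical overlapping classes for a normalized indicator in [0, 1].
    return {
        "Low":    tri(x, -0.5, 0.0, 0.5),
        "Medium": tri(x,  0.0, 0.5, 1.0),
        "High":   tri(x,  0.5, 1.0, 1.5),
    }

x = 0.3                      # falls in the Low/Medium overlap
mu = memberships(x)

# Classical "winner-takes-all" rule firing keeps only the strongest class...
winner = max(mu, key=mu.get)
# ...whereas preserving all competing membership values keeps the evidence
# that the input is partly Low and partly Medium, which RBER-style rules
# can then combine instead of discarding.
print(winner, mu)
```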

6.
This paper describes the use of a genetic algorithm (GA) to find optimal parameter values for trading agents that operate in virtual online auction ‘e-marketplaces’, where the rules of those marketplaces are also under simultaneous control of the GA. The aim is to use the GA to automatically design new mechanisms for agent-based e-marketplaces that are more efficient than online markets designed by (or populated by) humans. The space of possible auction types explored by the GA includes the continuous double auction (CDA) mechanism (as used in most of the world’s financial exchanges), and also two purely one-sided mechanisms. Surprisingly, the GA did not always settle on the CDA as an optimum. Instead, novel hybrid auction mechanisms were evolved, which are unlike any existing market mechanisms. In this paper we show that, when the market supply and demand schedules undergo sudden ‘shock’ changes partway through the evaluation process, two-sided hybrid market mechanisms can evolve which may be unlike any human-designed auction and yet may also be significantly more efficient than any human-designed market mechanism.
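As a rough sketch of the search procedure (not the paper's actual encoding), the snippet below runs a minimal genetic algorithm over a single real-valued gene in [0, 1] that is imagined to interpolate between a one-sided and a two-sided (CDA-like) mechanism; the fitness function is a placeholder that would, in a real experiment, be the measured efficiency of agent-based market simulations.

```python
import random

def fitness(gene):
    """Placeholder: in practice this would run market simulations with the
    mechanism parameterized by `gene` (0 = fully one-sided, 1 = fully
    two-sided/CDA-like) and return the measured efficiency."""
    return 1.0 - (gene - 0.62) ** 2        # invented surrogate landscape

def evolve(pop_size=20, generations=50, mut_sigma=0.05):
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2 + random.gauss(0.0, mut_sigma)  # crossover + mutation
            children.append(min(1.0, max(0.0, child)))
        pop = parents + children
    return max(pop, key=fitness)

print("evolved mechanism parameter:", evolve())
```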

7.
Stock market prediction is regarded as a challenging task in financial time-series forecasting. The central idea of successful stock market prediction is to achieve the best results using the minimum required input data and the least complex stock market model. To achieve these purposes, this article presents an integrated approach based on genetic fuzzy systems (GFS) and artificial neural networks (ANN) for constructing a stock price forecasting expert system. First, we use stepwise regression analysis (SRA) to determine the factors which have the most influence on stock prices. At the next stage, we divide our raw data into k clusters by means of self-organizing map (SOM) neural networks. Finally, all clusters are fed into independent GFS models with the ability of rule-base extraction and database tuning. We evaluate the capability of the proposed approach by applying it to stock price data gathered from the IT and Airlines sectors, and compare the outcomes with previous stock price forecasting methods using the mean absolute percentage error (MAPE). Results show that the proposed approach outperforms all previous methods, so it can be considered a suitable tool for stock price forecasting problems.
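A very condensed, non-authoritative sketch of the three-stage pipeline (factor selection, clustering, then a per-cluster model) is shown below; scikit-learn's univariate ranking and KMeans are used only as stand-ins for the paper's stepwise regression and SOM, and a plain linear regressor replaces the genetic fuzzy system. The toy data exists only to show the call pattern.

```python
# Stand-in pipeline: SelectKBest ~ stepwise regression, KMeans ~ SOM,
# LinearRegression ~ genetic fuzzy system.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

def fit_clustered_models(X, y, n_factors=5, k=3):
    selector = SelectKBest(f_regression, k=n_factors).fit(X, y)
    Xs = selector.transform(X)                       # keep influential factors
    clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xs)
    models = {}
    for c in range(k):                               # one model per cluster
        mask = clusters == c
        models[c] = LinearRegression().fit(Xs[mask], y[mask])
    return selector, models

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = X[:, 0] * 2.0 + X[:, 3] - X[:, 7] + rng.normal(scale=0.1, size=200)
selector, models = fit_clustered_models(X, y)
print(len(models), "cluster-specific models fitted")
```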

8.
Credit classification is an important component of critical financial decision-making tasks such as credit scoring and bankruptcy prediction. Credit classification methods are usually evaluated in terms of their accuracy, interpretability, and computational efficiency. In this paper, we propose an approach for the automatic design of fuzzy rule-based classifiers (FRBCs) from financial data using multi-objective evolutionary optimization algorithms (MOEOAs). Our method generates, in a single experiment, an optimized collection of solutions (financial FRBCs) characterized by various levels of accuracy-interpretability trade-off. In our approach we address the complexity- and semantics-related interpretability issues, we introduce original genetic operators for the classifier's rule base processing, and we implement our ideas in the context of the Non-dominated Sorting Genetic Algorithm II (NSGA-II), i.e., one of the presently most advanced MOEOAs. A significant part of the paper is devoted to an extensive comparative analysis of our approach and 24 alternative methods applied to three standard financial benchmark data sets, i.e., the Statlog (Australian Credit Approval), Statlog (German Credit Approval), and Credit Approval (also referred to as Japanese Credit) sets available from the UCI repository of machine learning databases (http://archive.ics.uci.edu/ml). Several performance measures including accuracy, sensitivity, specificity, and a number of interpretability measures are employed in order to evaluate the obtained systems. Our approach significantly outperforms the alternative methods in terms of the interpretability of the obtained financial data classifiers while remaining either competitive or superior in terms of their accuracy and the speed of decision making.
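The accuracy-interpretability trade-off is handled by multi-objective selection, whose core step is non-dominated sorting. The sketch below extracts the first Pareto front from a set of candidate classifiers scored on two objectives (accuracy to maximize, rule-base size to minimize); it is a generic illustration of the NSGA-II building block, not the paper's implementation, and the candidate scores are invented.

```python
def dominates(a, b):
    """a, b are (accuracy, n_rules); higher accuracy and fewer rules is better."""
    no_worse = a[0] >= b[0] and a[1] <= b[1]
    strictly_better = a[0] > b[0] or a[1] < b[1]
    return no_worse and strictly_better

def first_pareto_front(candidates):
    """Return candidates not dominated by any other (NSGA-II's first front)."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

# Hypothetical (accuracy, number-of-rules) pairs for candidate classifiers.
candidates = [(0.86, 12), (0.84, 5), (0.81, 4), (0.86, 20), (0.79, 9)]
print(first_pareto_front(candidates))
# -> [(0.86, 12), (0.84, 5), (0.81, 4)]; (0.86, 20) and (0.79, 9) are dominated.
```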

9.
The sequential auction problem is commonplace in open, electronic marketplaces such as eBay. This is the problem where a buyer has no dominant strategy in bidding across multiple auctions, even though the buyer would have a simple, truth-revealing strategy if there were but a single auction event. Our model allows for multiple, distinct goods and market dynamics with buyers and sellers that arrive over time. Sellers each bring a single unit of a good to the market while buyers can have values on bundles of goods. We model each individual auction as a second-price (Vickrey) auction and propose an options-based, proxied solution to provide price and winner-determination coordination across auctions. While still allowing for temporally uncoordinated market participation, this options-based approach solves the sequential auction problem and provides truthful bidding as a weakly dominant strategy for buyers. An empirical study suggests that this coordination can enable a significant efficiency and revenue improvement over the current eBay market design, and highlights the effect on performance of complex buyer valuations (buyers with substitutes and complements valuations) and of varying market liquidity.
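The building block being coordinated is the second-price (Vickrey) auction, in which the winner pays the second-highest bid; this is what makes truthful bidding appealing in the single-auction case. A minimal sketch of that clearing rule is given below; bidder names and bids are invented.

```python
def vickrey_winner(bids):
    """bids: dict of bidder -> bid. Returns (winner, price) where the
    winner pays the second-highest bid (second-price / Vickrey rule)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# With a single Vickrey auction, bidding one's true value is a dominant
# strategy; the sequential auction problem arises only when a buyer must
# spread demand across many such auctions over time.
print(vickrey_winner({"alice": 90, "bob": 75, "carol": 60}))  # ('alice', 75)
```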

10.
Because it operates under a strict time constraint, query processing for data streams should be continuous and rapid. To guarantee this constraint, most previous studies optimize the evaluation order of multiple join operations in a set of continuous queries using a greedy optimization strategy, so that the order is re-optimized dynamically at run-time due to the time-varying characteristics of data streams. However, this method often results in a sub-optimal plan because the greedy strategy traces only the first promising plan. This paper proposes a new multiple-query optimization approach, the Adaptive Sharing-based Extended Greedy Optimization Approach (A-SEGO), which traces multiple promising partial plans simultaneously. A-SEGO presents a novel method for sharing the results of common sub-expressions in a set of queries cost-effectively. The number of partial plans can be flexibly controlled according to the query processing workload. In addition, to avoid invoking the optimization process too frequently, optimization is performed only when the current execution plan is no longer relatively efficient. A series of experiments is comparatively analyzed to evaluate the performance of the proposed method in various stream environments.
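The difference between pure greedy plan selection and tracing multiple promising partial plans can be pictured as a beam search over join orders: instead of keeping only the single cheapest partial plan at each step, the k cheapest are retained. The cost model and beam width below are invented for illustration and do not reflect A-SEGO's actual cost formulas or sharing logic.

```python
import heapq

def beam_join_order(relations, join_cost, beam_width=3):
    """Order `relations` for joining, keeping the beam_width cheapest partial
    plans at every step instead of a single greedy choice.
    join_cost(plan, rel) is a caller-supplied (here hypothetical) cost model."""
    beam = [(0.0, [])]                                  # (cost so far, partial plan)
    for _ in relations:
        candidates = []
        for cost, plan in beam:
            for rel in relations:
                if rel not in plan:
                    candidates.append((cost + join_cost(plan, rel), plan + [rel]))
        beam = heapq.nsmallest(beam_width, candidates, key=lambda c: c[0])
    return min(beam, key=lambda c: c[0])

# Toy cost model in which deferring the large stream "A" is expensive: pure
# greedy (beam_width=1) grabs the cheap join first and ends up with a
# costlier complete plan than the wider beam.
sizes = {"A": 1000, "B": 10, "C": 100}
cost_fn = lambda plan, rel: sizes[rel] * (len(plan) + 1)
print(beam_join_order(["A", "B", "C"], cost_fn))                # (1320.0, ['A', 'B', 'C'])
print(beam_join_order(["A", "B", "C"], cost_fn, beam_width=1))  # (3210.0, ['B', 'C', 'A'])
```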

11.
The interpretative approach to compilation allows compiling programs by partially evaluating an interpreter with respect to a source program. This approach, though very attractive in principle, has not been widely applied in practice, mainly because of the difficulty of finding a partial evaluation strategy that always obtains “quality” compiled programs. In spite of this, in recent work we have provided a proof of concept that, at least for some examples, this approach can be applied to decompile Java bytecode into Prolog. This allows applying existing advanced tools for the analysis of logic programs in order to verify Java bytecode. However, successful partial evaluation of an interpreter for (a realistic subset of) Java bytecode is a rather challenging problem. The aim of this work is to improve the performance of the above decompilation process in two respects. First, we would like to obtain quality decompiled programs, i.e., simple and small ones. We refer to this as the effectiveness of the decompilation. Second, we would like the decompilation process to be as efficient as possible, both in terms of time and memory usage, in order to scale up in practice. We refer to this as the efficiency of the decompilation. With this aim, we propose several techniques for improving the partial evaluation strategy. We argue that our experimental results show that we are able to significantly improve the efficiency and effectiveness of the decompilation process.
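To make the idea of specialization concrete, the toy sketch below partially evaluates a tiny arithmetic expression language with respect to statically known variables, folding constants and leaving residual code for unknown inputs; it only illustrates the general principle of partial evaluation and has nothing to do with the paper's Java-bytecode-to-Prolog setting.

```python
# Toy partial evaluator: expressions are tuples ("num", n), ("var", name),
# or ("add"/"mul", left, right). `static` maps known variable names to values.
def peval(expr, static):
    kind = expr[0]
    if kind == "num":
        return expr
    if kind == "var":
        return ("num", static[expr[1]]) if expr[1] in static else expr
    op, l, r = expr[0], peval(expr[1], static), peval(expr[2], static)
    if l[0] == "num" and r[0] == "num":            # both operands known: fold
        return ("num", l[1] + r[1] if op == "add" else l[1] * r[1])
    return (op, l, r)                              # residual code for the rest

# Specializing  x*y + 3*4  with respect to a known y = 5 yields the residual
# program  x*5 + 12 : the work on statically known data has been compiled away.
prog = ("add", ("mul", ("var", "x"), ("var", "y")), ("mul", ("num", 3), ("num", 4)))
print(peval(prog, {"y": 5}))   # ('add', ('mul', ('var', 'x'), ('num', 5)), ('num', 12))
```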

12.
It is well known that financial returns are usually not normally distributed, but rather exhibit excess kurtosis. This implies that there is greater probability mass at the tails of the marginal or conditional distribution. Mixture-type time series models are potentially useful for modeling financial returns. However, most of these models assume that the return series in each component is conditionally Gaussian, which may result in underestimates of the occurrence of extreme financial events, such as market crashes. In this paper, we apply the class of Student t-mixture autoregressive (TMAR) models to the return series of the Hong Kong Hang Seng Index. A TMAR model consists of a mixture of g autoregressive components with Student t error distributions. Several interesting properties make the TMAR process a promising candidate for financial time series modeling. These models are able to capture serial correlations as well as time-varying means and volatilities, and the shape of the conditional distributions can vary over time from short- to long-tailed or from unimodal to multimodal. The use of Student t-distributed errors in each component of the model allows for conditional leptokurtic distributions, which can account for the commonly observed unconditional kurtosis in financial data.
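For reference, a g-component mixture autoregressive model of this kind has a conditional distribution of roughly the following form (a sketch of the standard MAR construction with Student-t components; the notation is ours, not necessarily the paper's):

\[
F\left(y_t \mid \mathcal{F}_{t-1}\right)
  = \sum_{i=1}^{g} \alpha_i\,
    T_{\nu_i}\!\left(
      \frac{y_t - \phi_{i0} - \sum_{j=1}^{p_i} \phi_{ij}\, y_{t-j}}{\sigma_i}
    \right),
\qquad
\alpha_i > 0,\quad \sum_{i=1}^{g} \alpha_i = 1,
\]

where \(T_{\nu_i}\) is the cumulative distribution function of a Student t variable with \(\nu_i\) degrees of freedom, \(\phi_{ij}\) are the autoregressive coefficients of the i-th component, and \(\sigma_i\) is its scale. The mixture weights and the differing component means and scales are what let the conditional shape shift between unimodal and multimodal, and between short- and long-tailed, over time.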

13.
Skin attribute tests, especially for women, have become critical in the development of daily cosmetics in recent years. However, clinical skin attribute testing is often costly and time consuming. In this paper, a novel prediction approach based on questionnaires using recurrent neural network models is proposed for predicting participants' skin attributes. The prediction engine, which is the most important part of this approach, is composed of three prediction models. Each of these models is a neural network allocated to predict a different skin attribute: Tone, Spots, or Hydration. We also provide a detailed analysis and solution concerning the preprocessing of data, the selection of key features, and the evaluation of results. Our prediction system is much faster and more cost effective than traditional clinical skin attribute tests. The system performs very well, and the prediction results show good precision, especially for Tone.
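A bare-bones illustration of the kind of architecture described (one small recurrent regressor per predicted attribute) is sketched below in PyTorch; the feature dimension, hidden size, and the idea of feeding the ordered questionnaire answers as a sequence are assumptions for illustration, not details from the paper.

```python
import torch
import torch.nn as nn

class AttributePredictor(nn.Module):
    """One recurrent regressor per skin attribute (Tone, Spots, Hydration)."""
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, n_answers, n_features)
        _, h = self.rnn(x)                # h: (1, batch, hidden)
        return self.head(h[-1]).squeeze(-1)

# Three independent models, one per attribute, as in the abstract.
models = {name: AttributePredictor() for name in ("Tone", "Spots", "Hydration")}
answers = torch.randn(4, 25, 1)          # 4 participants, 25 questionnaire items
print({k: m(answers).shape for k, m in models.items()})
```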

14.
Stocks with similar financial ratio values across years have similar price movements. We investigate this hypothesis by clustering groups of stocks that exhibit homogeneous financial ratio values across years, and then studying their price movements. We propose using cross-graph quasi-biclique (CGQB) subgraphs to cluster stocks, as they can define the three-dimensional (3D) subspaces of financial ratios in which the stocks are homogeneous across the years, and they can also handle missing values, which are rampant in stock data. Furthermore, investors can easily analyze these 3D subspaces to explore the relations between the stocks and financial ratios. We develop a novel algorithm, CGQBminer, which mines the complete set of CGQB subgraphs from the stock data. Through experimental analysis, we show that the hypothesis is valid. Furthermore, we demonstrate that an investment strategy which uses groups of stocks mined by CGQB subgraphs has higher returns than one that does not. We also conducted an extensive performance analysis of CGQBminer and show that it is efficient across different 3D datasets and parameter settings.

15.
Complex Price Dynamics in a Financial Market with Imitation
In this work a simple financial model with fundamentalists and imitators is considered. In order to describe the price dynamics of the heterogeneous stock market, a synergetic approach is used and some global bifurcations arising in the model are studied. It is shown that the fundamental equilibrium point P* may be destabilized through a subcritical Neimark–Sacker bifurcation and that two invariant closed curves, one attracting and one repelling, appear while P* is still stable. This particular bifurcation scenario allows us to show some noticeable features of the market that emerge when the imitation effect is emphasized. Among these features are, for instance, the volatility clusters associated with the presence of multistability (i.e., coexistence of attractors) and the hysteresis phenomenon.
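For orientation, a Neimark–Sacker bifurcation occurs in a discrete-time system when a pair of complex-conjugate eigenvalues of the Jacobian at the fixed point crosses the unit circle; the conditions below are the textbook form in our notation (not taken from the paper), with the subcritical case corresponding to a repelling invariant curve that coexists with the still-stable fixed point:

\[
\lambda_{1,2}(\mu^{*}) = e^{\pm i\theta_0}, \qquad
\left|\lambda_{1,2}(\mu)\right| < 1 \ \text{for}\ \mu < \mu^{*}, \qquad
\left.\frac{d\,|\lambda_{1,2}(\mu)|}{d\mu}\right|_{\mu=\mu^{*}} > 0,
\]

together with the non-resonance condition \(e^{ik\theta_0} \neq 1\) for \(k = 1, 2, 3, 4\). Here \(\mu\) is the bifurcation parameter (in the model above, a parameter measuring the strength of imitation) and \(\lambda_{1,2}\) are the eigenvalues at the fixed point.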

16.
Financial volatility refers to the intensity of the fluctuations in the expected return on an investment or the pricing of a financial asset due to market uncertainties. Hence, volatility modeling and forecasting are imperative to financial market investors, as such projections allow investors to adjust their trading strategies in anticipation of impending financial market movements. Following this, financial volatility trading is the capitalization on the uncertainties of the financial markets to realize investment profits in times of rising, falling and side-way market conditions. In this paper, an intelligent straddle trading system (framework) that consists of a volatility projection module (VPM) and a trade decision module (TDM) is proposed for financial volatility trading via the buying and selling of option straddles, to help a human trader capitalize on the underlying uncertainties of the Hong Kong stock market. Three different measures, namely (1) the historical volatility (HV), (2) the implied volatility (IV) and (3) the model-based volatility (MV) of the Hang Seng Index (HSI), are employed to quantify the implicit volatility of the Hong Kong stock market. The TDM of the proposed straddle trading system combines the respective volatility measures with the well-established moving-average convergence/divergence (MACD) principle to recommend trading actions to a human trader dealing in HSI straddles. However, the inherent limitation of the MACD trading rule is that it generates time-delayed trading signals due to the use of moving averages, which are essentially lagging trend indicators. This drawback is addressed in the proposed straddle trading system by applying the VPM to compute future projections of the volatility measures of the HSI prior to the activation of the TDM. The VPM is realized by a self-organising neural-fuzzy semantic network named the evolving fuzzy semantic memory (eFSM) model. Compared to existing statistical and computational-intelligence-based modeling techniques currently employed for financial volatility modeling and forecasting, eFSM possesses several desirable attributes, such as: (1) an evolvable knowledge base to continuously address the non-stationary characteristics of the Hong Kong stock market; (2) highly formalized human-like information computations; and (3) a transparent structure that can be interpreted via a set of linguistic IF–THEN semantic fuzzy rules. These qualities lend added credence to the computed HSI volatility projections. The volatility modeling and forecasting performance of the eFSM, when benchmarked against several established modeling techniques, as well as the observed trading returns of the proposed straddle trading system, are encouraging.
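The TDM's trading rule is built on the standard MACD construction, which can be computed directly from a price series with exponentially weighted moving averages; the sketch below uses the conventional 12/26/9 spans (a common default, not necessarily the paper's settings) and only illustrates the indicator, not the straddle logic or the eFSM forecasts.

```python
import pandas as pd

def macd(prices, fast=12, slow=26, signal=9):
    """Return the MACD line, its signal line and the histogram for a pandas
    Series of prices (conventional 12/26/9 spans assumed)."""
    ema_fast = prices.ewm(span=fast, adjust=False).mean()
    ema_slow = prices.ewm(span=slow, adjust=False).mean()
    macd_line = ema_fast - ema_slow
    signal_line = macd_line.ewm(span=signal, adjust=False).mean()
    return macd_line, signal_line, macd_line - signal_line

# A crossing of the MACD line above its signal line is the classical bullish
# trigger; because both lines are moving averages, the signal lags the
# underlying move, which is the limitation the VPM projections address.
prices = pd.Series([100, 101, 103, 102, 105, 107, 106, 108, 110, 109])
macd_line, signal_line, hist = macd(prices)
print(hist.iloc[-1])
```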

17.
We consider an Internet Service Provider’s (ISP’s) problem of providing end-to-end (e2e) services with bandwidth guarantees, using a path-vector based approach. In this approach, an ISP uses its edge-to-edge (g2g) single-domain contracts and vector of contracts purchased from neighboring ISPs as the building blocks to construct, or participate in constructing, an end-to-end “contract path”. We develop a spot-pricing framework for the e2e bandwidth guaranteed services utilizing this path contracting strategy, by formulating it as a stochastic optimization problem with the objective of maximizing expected profit subject to risk constraints. In particular, we present time-invariant path contracting strategies that offer high expected profit at low risks, and can be implemented in a fully distributed manner. Simulation analysis is employed to evaluate the contracting and pricing framework under different network and market conditions. An admission control policy based on the path contracting strategy is developed and its performance is analyzed using simulations.

18.
19.
A pattern is a model or a template used to summarize and describe the behavior (or the trend) of data that generally exhibit some recurrent events. Patterns have received considerable attention in recent years and have been widely studied in the data mining field. Various pattern mining approaches have been proposed and used for different applications such as network monitoring, moving object tracking, financial or medical data analysis, scientific data processing, etc. In these different contexts, discovered patterns are useful to detect anomalies, to predict data behavior (or trends) or, more generally, to simplify data processing or to improve system performance. However, to the best of our knowledge, patterns have never been used in the context of Web archiving. Web archiving is the process of continuously collecting and preserving portions of the World Wide Web for future generations. In this paper, we show how patterns of page changes can be useful tools for efficiently archiving Websites. We first define our pattern model that describes the importance of page changes. Then, we present the strategy used to (i) extract the temporal evolution of page changes, (ii) discover patterns, and (iii) exploit them to improve Web archives. The archive of the French public TV channels France Télévisions is chosen as a case study to validate our approach. Our experimental evaluation based on real Web pages shows the utility of patterns to improve archive quality and to optimize indexing or storing.
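As a very reduced illustration of the underlying idea (not the paper's pattern model), the sketch below estimates how often each page changes from a history of crawl observations and uses that rate to rank pages for re-archiving; the page names and observation data are invented.

```python
def change_rate(history):
    """history: list of booleans, one per crawl, True if the page changed
    since the previous crawl. Returns the empirical change frequency."""
    return sum(history) / len(history) if history else 0.0

def refresh_order(crawl_log):
    """crawl_log: dict page_url -> change history. Pages that change most
    often are re-archived first, which is the intuition behind using
    patterns of page changes to improve archive quality."""
    return sorted(crawl_log, key=lambda url: change_rate(crawl_log[url]), reverse=True)

crawl_log = {
    "ft.example/home":     [True, True, True, False, True],
    "ft.example/archives": [False, False, True, False, False],
    "ft.example/contact":  [False, False, False, False, False],
}
print(refresh_order(crawl_log))
```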

20.
Understanding how a program is constructed and how it functions are significant components of the task of maintaining or enhancing a computer program. We have analyzed videotaped protocols of experienced programmers as they enhanced a personnel database program. Our analysis suggests that there are two strategies for program understanding: the systematic strategy and the as-needed strategy. The programmer using the systematic strategy traces data flow through the program in order to understand global program behavior. The programmer using the as-needed strategy focuses on local program behavior in order to localize study of the program. Our empirical data show that there is a strong relationship between using a systematic approach to acquire knowledge about the program and modifying the program successfully. Programmers who used the systematic approach to study the program constructed successful modifications; programmers who used the as-needed approach failed to construct successful modifications. Programmers who used the systematic strategy gathered knowledge about the causal interactions of the program's functional components. Programmers who used the as-needed strategy did not gather such causal knowledge and therefore failed to detect interactions among components of the program.
