Found 8 similar documents (search time: 0 ms)
1.
2.
Universal compression algorithms can detect recurring patterns in any type of temporal data, including financial data, for the purpose of compression. In doing so, these algorithms effectively build a model of the data that can be used for either compression or prediction. We present a universal Variable Order Markov (VOM) model and use it to test the weak form of the Efficient Market Hypothesis (EMH). The EMH is tested on one-year series of intra-day exchange rates for 12 international currency pairs, sampled at 1, 5, 10, 15, 20, 25, and 30 minute intervals. Statistically significant compression is detected in all the time series, and the high-frequency series are also predictable above chance level. However, the model's predictability is not sufficient to generate a profitable trading strategy; thus the Forex market turns out to be efficient, at least most of the time.
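The abstract includes no code, but the core idea lends itself to a short sketch. The Python below builds a variable-order Markov model over a discretized return series by counting which symbols follow every context up to a maximum order, then predicts by backing off from the longest matching context. The up/down/flat discretization, the maximum order, and the data are illustrative assumptions, not the paper's actual setup.

```python
from collections import defaultdict

class VOMPredictor:
    """Variable-order Markov model: count symbol occurrences after every
    context up to max_order; predict from the longest matching context."""

    def __init__(self, max_order=5):
        self.max_order = max_order
        self.counts = defaultdict(lambda: defaultdict(int))  # context -> symbol -> count

    def fit(self, symbols):
        for i, sym in enumerate(symbols):
            for k in range(self.max_order + 1):
                if i - k < 0:
                    break
                context = tuple(symbols[i - k:i])  # k symbols preceding position i
                self.counts[context][sym] += 1

    def predict(self, history):
        # Back off from the longest matching context down to the empty one.
        for k in range(min(self.max_order, len(history)), -1, -1):
            context = tuple(history[len(history) - k:])
            if context in self.counts:
                dist = self.counts[context]
                return max(dist, key=dist.get)
        return None

# Illustrative use: discretize exchange-rate returns into up/down/flat symbols.
returns = [0.3, -0.1, 0.0, 0.2, -0.4, 0.1, 0.1, -0.2]
symbols = ['u' if r > 0 else 'd' if r < 0 else 'f' for r in returns]
model = VOMPredictor(max_order=3)
model.fit(symbols)
print(model.predict(symbols))  # predicted next move
```

The same context counts drive both uses mentioned in the abstract: normalized, they give the coding distribution for compression; their argmax gives the next-symbol prediction.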
3.
4.
This paper describes a novel and practical Japanese parser that uses decision trees. First, we construct a single decision tree to estimate modification probabilities, i.e., how likely one phrase is to modify another. Next, we introduce a boosting algorithm in which several decision trees are constructed and then combined for probability estimation. The constructed parsers are evaluated on the EDR Japanese annotated corpus. The single-tree method significantly outperforms conventional Japanese stochastic methods. Moreover, the boosted version of the parser shows two clear advantages: (1) better parsing accuracy than its single-tree counterpart for any amount of training data, and (2) no over-fitting to the data across iterations. The presented parser, the first non-English stochastic parser with practical performance, should tighten the coupling between natural language processing and machine learning.
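As a hedged illustration of the probability-estimation setup, the sketch below contrasts a single decision tree with a boosted ensemble, both returning P(modify) for a phrase pair via scikit-learn. The feature vectors and labels are synthetic stand-ins, and scikit-learn's AdaBoost (with its default shallow trees) stands in for the paper's own boosting algorithm.

```python
# Hypothetical features for a (modifier, head) phrase pair: distance,
# part-of-speech ids, punctuation flags, etc. The paper's real feature
# set and corpus (EDR) are not reproduced here.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 6))                 # stand-in feature vectors
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # stand-in "does modify" labels

# Single decision tree estimating P(modify | features).
single = DecisionTreeClassifier(max_depth=5).fit(X, y)

# Boosted ensemble of trees, combined for probability estimation.
boosted = AdaBoostClassifier(n_estimators=50).fit(X, y)

pair = X[:1]
print("single tree P(modify):", single.predict_proba(pair)[0, 1])
print("boosted     P(modify):", boosted.predict_proba(pair)[0, 1])
```

In a full parser these pairwise probabilities would then be combined to score whole dependency analyses; only the probability-estimation step is sketched here.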
5.
The online computational burden of linear model predictive control (MPC) can be moved offline by using multi-parametric programming, so-called explicit MPC. The solution to the explicit MPC problem is a piecewise affine (PWA) state feedback function defined over a polyhedral subdivision of the set of feasible states. The online evaluation of such a control law needs to determine the polyhedral region in which the current state lies. This procedure is called point location; its computational complexity is challenging and determines the minimum possible sampling time of the system. A new flexible algorithm is proposed which enables the designer to trade off between time and storage complexities. Utilizing the concept of hash tables and the associated hash functions, the proposed method solves an aggregated point location problem that avoids prohibitive complexity growth with the number of polyhedral regions, while the storage–processing trade-off can be optimized via scaling parameters. The flexibility and power of this approach are demonstrated by several numerical examples.
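A minimal sketch of the hash-based point-location idea, assuming regions are given as (A, b) pairs with region = {x : Ax ≤ b}: offline, overlay a uniform grid and store, per cell, the regions intersecting it (certified here by a small feasibility LP via scipy); online, hash the state to its cell and test only those few candidates. The grid resolution n_cells is the knob for the storage-versus-time trade-off; the helper names are hypothetical, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

def cell_of(x, lo, width, n_cells):
    """Hash a point to its grid cell (the hash function of the scheme)."""
    idx = np.floor((x - lo) / width).astype(int)
    return tuple(np.clip(idx, 0, n_cells - 1))

def build_index(regions, lo, hi, n_cells):
    """Offline: for every grid cell, store the regions that intersect it,
    certified by a feasibility LP over {A x <= b} plus the cell bounds."""
    dim = len(lo)
    width = (hi - lo) / n_cells
    index = {}
    for cell in np.ndindex(*([n_cells] * dim)):
        c_lo = lo + np.array(cell) * width
        bounds = list(zip(c_lo, c_lo + width))
        hits = []
        for i, (A, b) in enumerate(regions):
            res = linprog(np.zeros(dim), A_ub=A, b_ub=b, bounds=bounds)
            if res.status == 0:          # feasible: region meets this cell
                hits.append(i)
        index[cell] = hits
    return index, width

def locate(x, regions, index, lo, width, n_cells):
    """Online: hash to a cell, then test only the few candidate regions."""
    for i in index[cell_of(x, lo, width, n_cells)]:
        A, b = regions[i]
        if np.all(A @ x <= b + 1e-9):
            return i
    return None

# Two half-plane regions of [0,1]^2 split by x0 + x1 = 1 (toy example).
regions = [(np.array([[1.0, 1.0]]), np.array([1.0])),
           (np.array([[-1.0, -1.0]]), np.array([-1.0]))]
lo, hi = np.zeros(2), np.ones(2)
index, width = build_index(regions, lo, hi, n_cells=4)
print(locate(np.array([0.2, 0.3]), regions, index, lo, width, 4))  # -> 0
```

A coarser grid stores fewer cells but leaves more candidates to check per query; a finer grid does the opposite, which mirrors the storage–processing trade-off the paper optimizes via its scaling parameters.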
6.
The definition of a strong solution to a stochastic differential-functional equation with the entire prehistory is given, and the basic inequalities required to obtain existence and uniqueness theorems are established. Global existence and uniqueness theorems are then proved. Translated from Kibernetika i Sistemnyi Analiz, No. 4, pp. 142–151, July–August 2008.
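The abstract does not reproduce the equation itself; for orientation only, a generic form of a stochastic differential-functional equation with the entire prehistory (infinite delay) is shown below, where the segment x_t carries the whole past of the trajectory. This is an assumed textbook form, not the paper's exact statement.

```latex
\[
  dx(t) = f\bigl(t, x_t\bigr)\,dt + g\bigl(t, x_t\bigr)\,dW(t), \qquad
  x_t(\theta) = x(t+\theta), \quad \theta \in (-\infty, 0],
\]
\[
  x(t) = \varphi(t) \quad \text{for } t \le 0
  \quad \text{(the entire prehistory as initial data)},
\]
```
where $\varphi$ is the prescribed prehistory and $W$ is a Wiener process; a strong solution is a process satisfying this equation pathwise on a given probability space.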
7.
8.
Using a Two‐Level Structure to Manage the Point Location Problem in Explicit Model Predictive Control

The problem of determining the state region in which the current state point lies is referred to as the point location problem in explicit model predictive control. In this paper, a two-level structure for storing the state regions is proposed, and two efficient methods for solving the point-location problem are developed: the two-level grid (TLG) method and the grid-BST method. The TLG method uses a two-level hash table. Before building the two-level structure, the synonymy partitions are merged to reduce the memory storage demand. By setting each parameter in a triplet, the two-level hash table can reach its optimal state and balance the complexity among memory storage, preprocessing (offline computation), and online computation. The grid-BST method uses a hash table as the first-level structure and builds a binary search tree inside each hash grid cell that contains many partitions. This two-level structure reduces preprocessing time significantly, especially when the state partitions and the piecewise affine (PWA) control laws are numerous. Using hyperplanes (HPs) as the internal (non-leaf) nodes of the tree, the method stores only the distinct PWA control laws instead of the state partitions. The two proposed methods avoid the rapid complexity growth that occurs as the number of polyhedral partitions increases, and two numerical examples show their advantages, as the sketch below illustrates for the second level.
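A sketch of the grid-BST second level, under stated assumptions: partitions are (A, b) pairs, internal tree nodes are facet hyperplanes tested by sign, and a region is routed to every side it strictly reaches (checked with a feasibility LP). The splitter selection here is a naive first-fit, unlike the paper's construction, and the first-level hash grid would be the same kind of cell index sketched under entry 5.

```python
import numpy as np
from scipy.optimize import linprog

EPS = 1e-7  # strict-side tolerance so shared facets split cleanly

def feasible(A, b, extra=None):
    """LP feasibility of {x : A x <= b}, optionally with one extra row."""
    if extra is not None:
        h, k = extra
        A, b = np.vstack([A, h]), np.append(b, k)
    res = linprog(np.zeros(A.shape[1]), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * A.shape[1])
    return res.status == 0

def build_bst(regions, ids):
    """Second-level tree for one grid cell: internal nodes are hyperplanes
    (h, k); a region goes to each side it strictly reaches. We take the
    first facet that separates anything; the paper balances the tree."""
    if len(ids) <= 1:
        return ids
    for i in ids:
        A, b = regions[i]
        for h, k in zip(A, b):
            left  = [j for j in ids if feasible(*regions[j], extra=(h, k - EPS))]
            right = [j for j in ids if feasible(*regions[j], extra=(-h, -k - EPS))]
            if len(left) < len(ids) and len(right) < len(ids):
                return (h, k, build_bst(regions, left), build_bst(regions, right))
    return ids  # no separating facet found: fall back to a linear scan

def query_bst(node, x):
    """Online: descend by sign tests h @ x <= k to a short candidate list."""
    while isinstance(node, tuple):
        h, k, left, right = node
        node = left if h @ x <= k else right
    return node

# Toy cell containing two partitions split by x0 + x1 = 1.
regions = [(np.array([[1.0, 1.0]]), np.array([1.0])),
           (np.array([[-1.0, -1.0]]), np.array([-1.0]))]
tree = build_bst(regions, [0, 1])
print(query_bst(tree, np.array([0.2, 0.3])))   # -> [0]
```

In the method proper, each leaf would point at a PWA control law rather than a state partition, so identical laws are stored once; the tree here returns candidate partition ids for a final Ax ≤ b check.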