Sort order: 7 results found; search took 46 ms
1.
V. V. V'yugin 《Information and Computation》2002,175(2):146
The problem of existence of predictive complexity for the absolute loss game is studied. Predictive complexity is a generalization of Kolmogorov complexity which bounds the ability of any algorithm to predict elements of a sequence of outcomes. For perfectly mixable loss functions (logarithmic and squared difference are among them), predictive complexity is defined, like Kolmogorov complexity, to within an additive constant. The absolute loss function is not perfectly mixable, and the question of existence of the corresponding predictive complexity, defined to within an additive constant, is open. We prove that in the case of the absolute loss game the predictive complexity can be defined to within an additive term O(…), where n is the length of a sequence of outcomes. We also prove that in some restricted settings this bound cannot be improved.
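The loss functions named in this abstract are simple to state. The sketch below (the Laplace predictor and the outcome sequence are illustrative choices, not taken from the paper) computes the total loss a prediction algorithm suffers on a finite sequence of binary outcomes under the logarithmic, square, and absolute loss games:

```python
import math

# Loss functions from the online prediction ("game") setting:
# the outcome x is 0 or 1; the prediction p is a number in [0, 1].
def log_loss(x, p):
    return -math.log(p if x == 1 else 1 - p)

def square_loss(x, p):
    return (x - p) ** 2

def abs_loss(x, p):
    return abs(x - p)

def cumulative_loss(loss, outcomes, predictor):
    """Total loss suffered by `predictor` (a function of the past) on `outcomes`."""
    total = 0.0
    for n, x in enumerate(outcomes):
        total += loss(x, predictor(outcomes[:n]))
    return total

# Laplace predictor: predicts the smoothed frequency of 1s seen so far.
laplace = lambda past: (sum(past) + 1) / (len(past) + 2)

outcomes = [0, 1, 1, 0, 1, 1, 1, 0]
for loss in (log_loss, square_loss, abs_loss):
    print(loss.__name__, round(cumulative_loss(loss, outcomes, laplace), 3))
```

The mixability distinction the abstract draws is about how losses of several predictors can be merged: log and square loss admit a merging rule that loses only an additive constant, while absolute loss does not.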
2.
3.
Conditions are presented under which the maximum of the Kolmogorov complexity (algorithmic entropy) K(ω1…ωN) is attained, given the cost ∑i f(ωi) of a message ω1…ωN. Various extremal relations between the message cost and the Kolmogorov complexity are also considered; in particular, the minimization problem for the function ∑i f(ωi) − θK(ω1…ωN) is studied. Here, θ is a parameter, called the temperature by analogy with thermodynamics. We also study domains of small variation of this function.
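Kolmogorov complexity is uncomputable, so any executable illustration of this cost-minus-complexity objective must substitute a proxy. The sketch below is a loud assumption in that sense: it uses zlib-compressed length as a crude stand-in for K, and a hypothetical per-symbol cost f, to show the shape of the minimization problem over short binary messages:

```python
import itertools
import zlib

def K_proxy(s: str) -> int:
    # Kolmogorov complexity is uncomputable; zlib-compressed length is
    # used here only as a crude, computable upper-bound proxy.
    return len(zlib.compress(s.encode()))

def cost(s: str, f) -> float:
    # Additive message cost: the sum of per-symbol costs f(s_i).
    return sum(f(c) for c in s)

def best_message(N, theta, f):
    # Minimize  cost(s) - theta * K(s)  over all binary messages of length N
    # (brute force; feasible only for small N).
    msgs = (''.join(bits) for bits in itertools.product('01', repeat=N))
    return min(msgs, key=lambda s: cost(s, f) - theta * K_proxy(s))

f = lambda c: 2.0 if c == '1' else 1.0   # hypothetical per-symbol cost
print(best_message(8, theta=0.0, f=f))   # theta = 0: the cheapest message wins
```

Raising θ makes high-complexity messages more attractive, mirroring the thermodynamic analogy in the abstract: at low temperature the cost (energy) dominates, at high temperature the complexity (entropy) does.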
4.
V. V. V'yugin 《Information and Computation》2001,169(2):252
The central problem in machine learning (and statistics) is the problem of predicting future events xn+1 based on past observations x1x2…xn, where n=1, 2…. The main goal is to find a method of prediction that minimizes the total loss suffered on a sequence x1x2…xn+1 for n=1, 2…. We say that a data sequence is stochastic if there exists a simply described prediction algorithm whose performance is close to the best possible one. This optimal performance is defined in terms of Vovk's predictive complexity, which is a generalization of the notion of Kolmogorov complexity. Predictive complexity gives a limit on the predictive performance of simply described prediction algorithms. In this paper we argue that data sequences normally occurring in the real world are stochastic; more formally, we prove that Levin's a priori semimeasure of nonstochastic sequences is small.
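A concrete instance of "a simply described prediction algorithm whose performance is close to the best possible one" is the Laplace rule under logarithmic loss. The sketch below (illustrative, not from the paper) compares its cumulative loss to that of the best constant predictor chosen in hindsight; for log loss this gap is provably at most log(n+1):

```python
import math

def log_loss(x, p):
    return -math.log(p if x == 1 else 1 - p)

def laplace_total_loss(xs):
    """Cumulative log loss of the Laplace rule p_n = (ones_so_far + 1) / (n + 2)."""
    total, ones = 0.0, 0
    for n, x in enumerate(xs):
        p = (ones + 1) / (n + 2)
        total += log_loss(x, p)
        ones += x
    return total

def best_constant_loss(xs):
    """Log loss of the best fixed probability chosen in hindsight."""
    q = sum(xs) / len(xs)
    q = min(max(q, 1e-12), 1 - 1e-12)  # guard the all-0s / all-1s cases
    return sum(log_loss(x, q) for x in xs)

xs = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
# The Laplace rule trails the best constant predictor by only O(log n):
print(laplace_total_loss(xs) - best_constant_loss(xs))
```

The regret bound follows because the Laplace rule assigns a sequence with k ones probability 1/((n+1)·C(n,k)), while the best constant assigns at most 1/C(n,k).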
5.
6.
Main laws of probability theory, when applied to individual sequences, have a robustness property under small violations of randomness. For example, the law of large numbers for the symmetric Bernoulli scheme holds for a sequence where the randomness deficiency of its initial fragment of length n grows as o(n). The law of iterated logarithm holds if the randomness deficiency grows as o(log log n). We prove that Birkhoff's individual ergodic theorem is nonrobust in this sense. If the randomness deficiency grows arbitrarily slowly on initial fragments of an infinite sequence, this theorem can be violated. An analogous nonrobustness property holds for the Shannon–McMillan–Breiman theorem.
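The law of large numbers the abstract starts from is easy to observe empirically. The sketch below shows only the classical statement on a typical pseudorandom sequence, not the randomness-deficiency refinement the paper studies:

```python
import random

# Symmetric Bernoulli scheme: the law of large numbers says the frequency
# of 1s in a prefix of length n tends to 1/2 for typical sequences.
random.seed(0)  # fixed seed so the run is reproducible
seq = [random.randint(0, 1) for _ in range(100_000)]

for n in (100, 1_000, 10_000, 100_000):
    freq = sum(seq[:n]) / n
    print(n, freq)
```

The paper's point is about how far such laws survive when the sequence is only approximately random: the LLN tolerates deficiency o(n), while Birkhoff's theorem tolerates no unbounded deficiency growth at all.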
7.
The Lempel–Ziv universal coding scheme is asymptotically optimal for the class of all stationary ergodic sources. A problem of robustness of this property under small violations of ergodicity is studied. The notion of deficiency of algorithmic randomness is used as a measure of disagreement between data sequence and probability measure. We prove that universal compression schemes from a large class are nonrobust in the following sense: If the randomness deficiency grows arbitrarily slowly on initial fragments of an infinite sequence, then the property of asymptotic optimality of any universal compression algorithm can be violated. Lempel–Ziv compression algorithms are robust on infinite sequences generated by ergodic Markov chains when the randomness deficiency of their initial fragments of length n grows as o(n).
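The incremental (LZ78-style) parsing underlying Lempel–Ziv coding is easy to sketch: each phrase is the shortest prefix of the remaining input not seen as a phrase before, and the number of phrases governs the code length. The sequences below are hypothetical stand-ins, a highly regular string versus a pseudorandom one, chosen to show that fewer phrases means better compression:

```python
import random

def lz78_phrases(s: str) -> int:
    """Number of phrases in the LZ78 incremental parsing of s.
    Fewer phrases on a string of length n means better compression."""
    dictionary, count, current = {''}, 0, ''
    for c in s:
        current += c
        if current not in dictionary:   # shortest new phrase found
            dictionary.add(current)
            count += 1
            current = ''
    if current:                         # unfinished trailing phrase
        count += 1
    return count

random.seed(1)
regular = '01' * 5000                                  # highly regular source
rand = ''.join(random.choice('01') for _ in range(10_000))  # random source
print(lz78_phrases(regular), lz78_phrases(rand))
```

On a regular string the phrase count grows roughly like the square root of the length, while on a random string it grows nearly linearly (up to a log factor), which is the asymptotic-optimality phenomenon the abstract refers to.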