Abstract: | An "Occam algorithm" learning model maintains a tentative hypothesis consistent with past observations and, when a new observation contradicts the current hypothesis, updates to the next-simplest hypothesis consistent with all observations seen so far. In previous work, observations were assumed to be stochastically independent. This paper initiates the study of such models under weaker Markovian assumptions on the observations. In the special case where the sequence of hypotheses satisfies a monotonicity condition, it is shown that the number of mistakes made in classifying the first t observations is O(√t log 1/π_i), where π_i is the stationary probability of the initial state i of the Markov chain. |
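The update rule described in the abstract can be sketched as follows. This is an illustrative toy implementation, not the paper's formal model: the hypothesis ordering, the realizability assumption (some hypothesis in the list is consistent with the whole stream), and all names here are assumptions for the sake of the example.

```python
def occam_learn(hypotheses, stream):
    """Toy Occam-style learner (illustrative sketch only).

    hypotheses: list of predictors h(x) -> label, ordered simplest first;
                assumed to contain a hypothesis consistent with the stream.
    stream:     iterable of (x, true_label) observations.
    Returns the number of classification mistakes made.
    """
    history = []   # all observations seen so far
    idx = 0        # index of the current (simplest consistent) hypothesis
    mistakes = 0
    for x, y in stream:
        if hypotheses[idx](x) != y:
            mistakes += 1
            history.append((x, y))
            # Advance to the next-simplest hypothesis consistent with
            # every observation seen so far.  Earlier hypotheses were
            # already ruled out, so the index only moves forward.
            while any(hypotheses[idx](xp) != yp for xp, yp in history):
                idx += 1
        else:
            history.append((x, y))
    return mistakes
```

For instance, with threshold hypotheses h_t(x) = (x >= t) for t = 0..4 and a target threshold of 3, the stream [(1, False), (3, True), (2, False), (4, True)] causes two mistakes before the learner settles on a consistent hypothesis.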