Links between probabilistic automata and hidden Markov models: probability distributions, learning models and induction algorithms
Authors: P. Dupont, F. Denis, Y. Esposito
Affiliations: a INGI, Université catholique de Louvain, Place Sainte-Barbe 2, B-1348 Louvain-la-Neuve, Belgium
b LIF-CMI, UMR 6166, 39, Rue F. Joliot Curie, 13453 Marseille Cedex 13, France
Abstract:
This article presents an overview of Probabilistic Automata (PA) and discrete Hidden Markov Models (HMMs), and aims at clarifying the links between them. The first part of this work concentrates on the probability distributions generated by these models. Necessary and sufficient conditions for an automaton to define a probabilistic language are detailed. It is proved that probabilistic deterministic finite automata (PDFA) form a proper subclass of probabilistic non-deterministic finite automata (PNFA). Two families of equivalent models are described next. On the one hand, HMMs and PNFA with no final probabilities generate distributions over complete finite prefix-free sets. On the other hand, HMMs with final probabilities and probabilistic automata generate distributions over strings of finite length. The second part of this article presents several learning models, which formalize the problem of PA induction or, equivalently, the problem of HMM topology induction and parameter estimation. These learning models include the PAC framework and identification with probability 1. Links with Bayesian learning are also discussed. The last part of this article presents an overview of induction algorithms for PA or HMMs using state merging, state splitting, parameter pruning and error-correcting techniques.
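As an illustration of the second family of models mentioned in the abstract, the following minimal sketch (not taken from the article; the two-state automaton and all its parameters are invented for illustration) computes the probability a PNFA with final probabilities assigns to a finite string, using the standard matrix form P(w) = iota^T A[w1] ... A[wn] tau.

```python
import numpy as np

# Illustrative PNFA over the alphabet {a, b} with 2 states (made-up example).
# iota: initial distribution over states; A[s]: transition probabilities on
# symbol s; tau: final (halting) probabilities.
iota = np.array([1.0, 0.0])
A = {
    "a": np.array([[0.2, 0.3],
                   [0.0, 0.1]]),
    "b": np.array([[0.1, 0.1],
                   [0.4, 0.2]]),
}
tau = np.array([0.3, 0.3])

# Consistency check: from every state, the outgoing transition probabilities
# plus the final probability must sum to 1 for the automaton to define a
# probability distribution over strings of finite length.
row_sums = sum(A[s] for s in A).sum(axis=1) + tau
assert np.allclose(row_sums, 1.0)

def string_probability(w: str) -> float:
    """Probability that the PNFA generates the string w and then halts."""
    v = iota
    for symbol in w:
        v = v @ A[symbol]
    return float(v @ tau)

print(string_probability(""))    # probability of the empty string: 0.3
print(string_probability("ab"))  # 0.066
```

Dropping the final vector tau corresponds to the other family described in the abstract: without final probabilities, the same initial and transition parameters define distributions over complete finite prefix-free sets (for instance, over all strings of a fixed length) rather than over all strings of finite length.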
Keywords: Probabilistic automata; Hidden Markov models; Grammar induction; PAC learning; Bayesian learning; Induction algorithms; HMM topology learning
Indexed in ScienceDirect and other databases.