On the relation between discriminant analysis and mutual information for supervised linear feature extraction

Authors: Sergios Petridis

Affiliation: Computational Intelligence Laboratory, Institute of Informatics and Telecommunications, National Center for Scientific Research “Demokritos”, 15310 Aghia Paraskevi, Athens, Greece

Abstract: This paper provides a unifying view of three discriminant linear feature extraction methods: linear discriminant analysis, heteroscedastic discriminant analysis, and maximization of mutual information. We propose a model-independent reformulation of the criteria related to these three methods that stresses their similarities and elucidates their differences. Based on assumptions about the probability distribution of the classification data, we obtain sufficient conditions under which two or more of these criteria coincide. It is shown that these conditions also suffice for Bayes optimality of the criteria. Our approach results in an information-theoretic derivation of linear discriminant analysis and heteroscedastic discriminant analysis. Finally, regarding linear discriminant analysis, we discuss its relation to multidimensional independent component analysis and derive suboptimality bounds based on information theory.

Keywords: Linear feature extraction; Linear discriminant analysis; Heteroscedastic discriminant analysis; Maximization of mutual information; Bayes error; Negentropy
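To make the first of the abstract's three methods concrete, the following is a minimal sketch of supervised linear feature extraction via classical (homoscedastic) linear discriminant analysis: the projection maximizes between-class scatter relative to within-class scatter, obtained from the leading eigenvectors of Sw⁻¹Sb. The function name, the toy data, and the dimensions are illustrative assumptions, not part of the paper.

```python
import numpy as np

def lda_projection(X, y, n_components):
    """Fisher/LDA projection: leading eigenvectors of Sw^{-1} Sb,
    where Sw is within-class and Sb is between-class scatter."""
    classes = np.unique(y)
    mean_total = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)          # within-class scatter
        diff = (mc - mean_total).reshape(-1, 1)
        Sb += len(Xc) * diff @ diff.T          # between-class scatter
    # Eigenvectors of Sw^{-1} Sb, sorted by decreasing eigenvalue
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(eigvals.real)[::-1]
    W = eigvecs[:, order[:n_components]].real
    return W  # columns span the discriminant subspace

# Toy example (assumed data): two Gaussian classes in 3-D, projected to 1-D
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(3, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
W = lda_projection(X, y, n_components=1)
Z = X @ W  # extracted discriminant feature
```

Under the homoscedastic Gaussian assumptions discussed in the paper, this projection is Bayes optimal; heteroscedastic discriminant analysis and mutual-information maximization relax those assumptions.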