Generative multi-view and multi-feature learning for classification
Affiliation:1. Biometrics Research Center, Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China;2. Department of Computer and Information Science, University of Macau, Macau, China;3. Department of Computer Science, Harbin Institute of Technology Shenzhen Graduate School, Shenzhen, China;4. School of Science and Engineering, The Chinese University of Hong Kong (Shenzhen), Shenzhen, China
Abstract:Multi-view classification has attracted much attention in recent years. In general, an object can be represented by various views or modalities, and exploiting the correlations across views helps improve classification performance. However, each view can itself be described by multiple features; such data are called multi-view and multi-feature data. Unlike many existing multi-view methods, which model multiple views but ignore the intrinsic information among the various features within each view, this paper proposes a generative Bayesian model that not only jointly accounts for both the features and the views, but also learns a representation that discriminates among categories. A latent variable is assumed for each feature in each view, and the raw feature is treated as a projection of the latent variable from a more discriminative space. In particular, the latent variables in each view belonging to the same class are encouraged to follow the same Gaussian distribution, while those belonging to different classes are constrained to follow different distributions, fully exploiting the label information. For optimization, the model is transformed into a class-conditional form, and an effective algorithm is designed to alternately estimate the parameters and latent variables. Experimental results on extensive synthetic data and four real-world datasets demonstrate the effectiveness and superiority of our method over the state of the art.
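The abstract states the model only in words. As a minimal sketch of the generative process it describes (the notation below is assumed for illustration and is not taken from the paper), one plausible formulation is

\[
\mathbf{h}_{v,f} \mid y = c \;\sim\; \mathcal{N}(\boldsymbol{\mu}_c, \boldsymbol{\Sigma}_c), \qquad
\mathbf{x}_{v,f} = \mathbf{W}_{v,f}\,\mathbf{h}_{v,f} + \boldsymbol{\varepsilon}, \quad \boldsymbol{\varepsilon} \sim \mathcal{N}(\mathbf{0}, \sigma^{2}\mathbf{I}),
\]

where \(\mathbf{x}_{v,f}\) denotes the \(f\)-th raw feature of the \(v\)-th view, \(\mathbf{h}_{v,f}\) its latent representation, and \(\mathbf{W}_{v,f}\) the learned projection. Tying all same-class latent variables to one class-specific Gaussian \(\mathcal{N}(\boldsymbol{\mu}_c, \boldsymbol{\Sigma}_c)\), while keeping the Gaussians of different classes apart, yields the class-conditional form mentioned in the abstract, whose parameters and latent variables can then be estimated alternately in an EM-style loop.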