     

Research on LDA Scene Classification Based on Automatic Acquisition of Visual Dictionary Capacity

Cite this article: ZHANG Yi, ZHONG Ying-Chun, CHEN Jun-Bin. Research on LDA Scene Classification Based on Automatic Acquisition of Visual Dictionary Capacity[J]. Journal of Guangdong University of Technology, 2015, 32(4): 150-154. DOI: 10.3969/j.issn.1007-7162.2015.04.027
Authors: ZHANG Yi  ZHONG Ying-Chun  CHEN Jun-Bin
Affiliation: School of Automation, Guangdong University of Technology, Guangzhou 510006, China
Funding: Science and Technology Program of Guangdong Province (2010A030500006)

Abstract: An efficient method is proposed for obtaining the visual dictionary capacity of the bag-of-words (BoW) model, and the scene classification performance of this method combined with the Latent Dirichlet Allocation (LDA) model is studied. After the feature matrix of the scene image data set is built from SIFT features, affinity propagation is first applied to the feature matrix to obtain a family of reasonable cluster counts; the smallest of these counts is taken as the visual dictionary capacity, from which the visual dictionary is generated. The words of this dictionary are then used to describe the training and testing sets of scene images. Finally, the LDA model is applied to the testing set for scene classification experiments. Experimental results show that the proposed method maintains high scene classification accuracy while significantly improving classification efficiency.

Keywords: bag-of-words model; visual words; visual dictionary; Latent Dirichlet Allocation (LDA) model
Received: 2014-09-09
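The pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: random blob vectors stand in for 128-D SIFT descriptors, the preference sweep is one assumed way to obtain a family of affinity-propagation cluster counts (the abstract does not specify how the family is produced), and the pseudo-image grouping is invented for the sake of a runnable example.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation, KMeans
from sklearn.datasets import make_blobs
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics import euclidean_distances

# Stand-in "SIFT descriptors" pooled from the whole image set
# (real SIFT descriptors would be 128-D).
descriptors, _ = make_blobs(n_samples=300, n_features=16, centers=8,
                            random_state=0)

# Step 1: run affinity propagation under several preference values to get
# a family of plausible cluster counts, then take the smallest count as
# the visual dictionary capacity K.
S = -euclidean_distances(descriptors, squared=True)   # similarity matrix
median_pref = np.median(S)
counts = []
for scale in (0.5, 1.0, 2.0):                          # assumed sweep
    ap = AffinityPropagation(affinity="precomputed",
                             preference=scale * median_pref,
                             damping=0.9, random_state=0).fit(S)
    counts.append(len(ap.cluster_centers_indices_))
counts = [c for c in counts if c > 0]   # drop runs that did not converge
K = min(counts)

# Step 2: build the visual dictionary (codebook) of K words with K-means.
codebook = KMeans(n_clusters=K, n_init=10, random_state=0).fit(descriptors)

# Step 3: describe an image as a bag-of-words count vector over the K words.
def bow_vector(image_descriptors):
    words = codebook.predict(image_descriptors)
    return np.bincount(words, minlength=K)

# Treat every 30 consecutive descriptors as one pseudo-image.
images = [descriptors[i:i + 30] for i in range(0, 300, 30)]
X = np.array([bow_vector(img) for img in images])

# Step 4: LDA turns each BoW vector into a topic distribution, which then
# serves as the low-dimensional feature for scene classification.
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
topics = lda.transform(X)   # one topic distribution per pseudo-image
```

Because affinity propagation fixes K automatically from the data, the expensive step of cross-validating the dictionary size disappears, which is the source of the efficiency gain the abstract reports.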
This article is indexed by Wanfang Data and other databases.
