Similar Literature
 20 similar documents found (search time: 110 ms)
1.
To address the high computational complexity and poor performance of the traditional random forest algorithm on high-dimensional, noisy text classification, an improved random forest algorithm based on the Latent Dirichlet Allocation (LDA) topic model is proposed. The algorithm models the original text with LDA and maps it into the topic space, which preserves the gist of the original text while greatly reducing the impact of textual noise on classification. In place of the random feature selection used when growing the decision trees of a random forest, the algorithm uses symmetric uncertainty to measure the correlation between features during tree construction, thereby reducing the correlation between different decision trees. Classification is finally performed in the topic space with the improved random forest. Experiments show that the algorithm performs well on text classification.
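The abstract does not spell out the symmetric uncertainty measure; a common definition is SU(X, Y) = 2·I(X; Y) / (H(X) + H(Y)), where I is mutual information and H is Shannon entropy. A minimal sketch of that formula over discrete feature values (toy data, not the paper's implementation):

```python
import numpy as np
from collections import Counter

def entropy(x):
    # Shannon entropy of a discrete sequence, in nats
    counts = np.array(list(Counter(x).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def mutual_information(x, y):
    # I(X;Y) = H(X) + H(Y) - H(X,Y)
    joint = list(zip(x, y))
    return entropy(x) + entropy(y) - entropy(joint)

def symmetric_uncertainty(x, y):
    # SU in [0, 1]: 0 = independent features, 1 = deterministically related
    hx, hy = entropy(x), entropy(y)
    if hx + hy == 0:
        return 0.0
    return 2.0 * mutual_information(x, y) / (hx + hy)
```

Feature pairs with high SU are redundant; preferring low-SU features when growing each tree is one way to decorrelate the trees of the forest.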

2.
Chinese short texts contain few words and carry little descriptive information, so common text classification methods perform poorly on them. Traditional methods also produce very high-dimensional vectors on large-scale text classification, making the algorithms inefficient, and the feature selection methods typically used for long texts are purely statistical, ignoring the semantic relations between terms. To address these problems, this paper proposes a Chinese short-text classification method based on chi-square feature selection and the LDA topic model: the output of a trained LDA topic model is used to expand the features produced by traditional feature selection, fusing statistical and semantic information into the classification algorithm. Comparative experiments show that this method improves Chinese short-text classification.

4.
LDA takes no account of the nature of the input data: it assigns topic labels to all words in the original input space, including non-content words, which makes the topic distributions imprecise. To address this shortcoming, a feature-reduction method combining LSI and LDA is proposed: LSI first maps the original word space into a semantic space, the key features of the original feature set are then selected according to their semantic relations, and finally the LDA model is trained by sampling over this smaller, more on-topic document subset. On text classification over the Fudan University Chinese corpus, the new method improves classification accuracy by 1.50% over using LDA alone, showing that the proposed LSI_LDA model has better classification performance.

5.
Short texts have sparse features and depend strongly on context, so traditional long-text classification techniques cannot be applied to them directly. To address feature sparsity, a short-text classification method is proposed that expands features using a Sentence-LDA topic model, an extension of Latent Dirichlet Allocation (LDA) which assumes that each sentence is generated from a single topic distribution. A trained Sentence-LDA model predicts the topic distribution of the original short text, and the resulting topic words are appended to the original features, completing the feature expansion. The expanded short texts are then classified with a support vector machine (SVM). Experiments show that, compared with representing short texts directly in the vector space model (VSM), the proposed method effectively improves short-text classification accuracy.

6.
To address the shortness and feature sparsity of Chinese short texts, a short-text classification method based on feature expansion with the Latent Dirichlet Allocation model is proposed. On top of the original features, the LDA topic model predicts the topic distribution of each short text; words from those topics are taken as additional features and appended to the original ones, and SVM is then used for classification. Experiments show that, compared with the traditional approach of representing short-text features directly with the VSM model, the method improves classification across different categories to varying degrees, demonstrating that supplementing short texts with LDA feature information is practical and feasible.

7.
Traditional sentiment classification methods usually take lexical information such as words or phrases as the features of the text vector model, which leads to unclear sentiment targets and missed implicit opinions. To address this, a sentiment classification method based on topic roles is proposed. The method first extracts the potential opinion targets in a text to form a target set; as the subjects described by sentiment-bearing sentences, these targets preserve the text's sentiment information well. LDA then extracts topics from the target set, and each extracted topic is split into "positive" and "negative" feature terms, recorded as positive and negative topic roles that carry the text's sentiment information. Finally, the sentiment influence value of each topic role in the text is computed and a topic-role model is built. Experimental results show that the proposed method improves the accuracy of subjective-text sentiment classification over traditional methods.

8.
The SVM classification algorithm has clear advantages on high-dimensional data but does not consider semantic similarity, whereas the LDA topic model addresses the similarity-measurement and single-topic problems of traditional text classification. To combine the strengths of SVM and LDA and improve classification accuracy, a new LDA-wSVM classification model is proposed. LDA is used for modeling and feature selection, determining the number of topics and the latent topic-document matrix; the classical weighting scheme is improved by taking the association between each feature and the class into account, yielding a new weight computation method; and a weight-based wSVM classifier is applied in the feature-word space. Experiments classifying the Sogou Lab news corpus on the R platform achieve a high macro-averaged score of 0.943, showing that the proposed LDA-wSVM model performs very well in automatic text classification.

9.
To address the incompleteness and poor interpretability of word-level feature representations in traditional text classification, a W-E CNN text classification method based on words and event topics is proposed, together with a BTM-based event topic model. The traditional word-based feature representation is concatenated with the event-topic representation as the input to a CNN, enriching the semantic information of the features and improving classification accuracy. Experimental analysis shows that the method's classification accuracy is, to some extent, better than that of other methods.

10.
Text similarity computation based on the LDA topic model   Cited: 1 (self-citations: 0, citations by others: 1)
王振振  何明  杜永萍 《计算机科学》2013,40(12):229-232
LDA (Latent Dirichlet Allocation) is an unsupervised learning model, proposed in recent years, with strong text-representation ability. A text similarity computation method based on the LDA topic model is proposed: LDA models the corpus, Gibbs sampling (an MCMC method) is used for inference to estimate the model parameters indirectly, uncovering the relations between the latent topics and the words hidden within the texts and yielding each text's topic distribution; the similarity between texts is then computed from these distributions, and finally clustering experiments on the text similarity matrix evaluate the clustering quality. Experimental results show that the method clearly improves both the accuracy of text similarity computation and the text clustering results.
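The similarity step can be sketched as follows; note that scikit-learn's LDA uses variational inference rather than the Gibbs sampling used in the paper, and the corpus is a toy example:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["the cat sat on the mat", "a cat and a dog",
        "stock market prices rise", "market traders sell stock"]

vec = CountVectorizer()
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)   # each row is a document's topic distribution

def cosine(a, b):
    # Cosine similarity between two topic distributions
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pairwise document similarity matrix over topic distributions
n = len(docs)
sim = np.array([[cosine(theta[i], theta[j]) for j in range(n)]
                for i in range(n)])
```

The matrix `sim` is what the paper's clustering experiment would operate on; cosine over topic distributions captures similarity between documents that share topics but no surface words.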

11.
Cross-domain word representation aims to learn high-quality semantic representations in an under-resourced domain by leveraging information in a resourceful domain. However, most existing methods mainly transfer the semantics of common words across domains, ignoring the semantic relations among domain-specific words. In this paper, we propose a domain structure-based transfer learning method to learn cross-domain representations by leveraging the relations among domain-specific words. To accomplish this, we first construct a semantic graph to capture the latent domain structure using domain-specific co-occurrence information. Then, in the domain adaptation process, beyond domain alignment, we employ Laplacian Eigenmaps to ensure the domain structure is consistently distributed in the learned embedding space. As such, the learned cross-domain word representations not only capture shared semantics across domains, but also maintain the latent domain structure. We performed extensive experiments on two tasks, namely sentiment analysis and query expansion. The experiment results show the effectiveness of our method for tasks in under-resourced domains.

12.
We study several techniques for representing, fusing and comparing content representations of news documents. As underlying models we consider the vector space model (both in a term setting and in a latent semantic analysis setting) and probabilistic topic models based on latent Dirichlet allocation. Content terms can be classified as topical terms or named entities, yielding several models for content fusion and comparison. All used methods are completely unsupervised. We find that simple methods can still outperform the current state-of-the-art techniques.

13.
Model-based invariants are relations between model parameters and image measurements, which are independent of the imaging parameters. Such relations are true for all images of the model. Here we describe an algorithm which, given L independent model-based polynomial invariants describing some shape, will provide a linear re-parameterization of the invariants. This re-parameterization has the properties that: (i) it includes the minimal number of terms, and (ii) the shape terms are the same in all the model-based invariants. This final representation has 2 main applications: (1) it gives new representations of shape in terms of hyperplanes, which are convenient for object recognition; (2) it allows the design of new linear shape from motion algorithms. In addition, we use this representation to identify object classes that have universal invariants.

14.
Proof-theory has traditionally been developed based on linguistic (symbolic) representations of logical proofs. Recently, however, logical reasoning based on diagrammatic or graphical representations has been investigated by logicians. Euler diagrams were introduced in the eighteenth century. But it is quite recent (more precisely, in the 1990s) that logicians started to study them from a formal logical viewpoint. We propose a novel approach to the formalization of Euler diagrammatic reasoning, in which diagrams are defined not in terms of regions as in the standard approach, but in terms of topological relations between diagrammatic objects. We formalize the unification rule, which plays a central role in Euler diagrammatic reasoning, in a style of natural deduction. We prove the soundness and completeness theorems with respect to a formal set-theoretical semantics. We also investigate structure of diagrammatic proofs and prove a normal form theorem.

15.
李岩  张博文  郝红卫 《计算机应用》2016,36(9):2526-2530
To address the lack of semantic association between expansion terms and the original query in traditional query expansion for specialized domains, a query expansion method based on semantic vector representations is proposed. First, a semantic vector representation model is built that learns each word's representation from its contexts in the corpus. Semantic similarity between words is then computed from these vectors, and the words most semantically similar to the query terms are selected as expansion terms to extend the original query. Finally, a biomedical document retrieval system is built on the proposed method and compared, with significance analysis, against traditional query expansion based on Wikipedia or WordNet and against the systems that competed in BioASQ 2014-2015. The results show that retrieval with semantic-vector query expansion outperforms traditional query expansion, raising mean average precision by at least one percentage point, with significant improvements over the competition systems.
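The expansion step can be sketched with a toy embedding table. The vectors below are made up for illustration; a real system would learn them from the corpus as the abstract describes:

```python
import numpy as np

# Hypothetical word vectors (illustrative values only)
vectors = {
    "gene":    np.array([0.9, 0.1, 0.0]),
    "genome":  np.array([0.8, 0.2, 0.1]),
    "protein": np.array([0.7, 0.3, 0.2]),
    "car":     np.array([0.0, 0.1, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def expand_query(query_terms, k=1):
    # For each query term, add its k nearest neighbours in embedding space
    expansion = []
    for q in query_terms:
        candidates = [(w, cosine(vectors[q], v))
                      for w, v in vectors.items() if w not in query_terms]
        candidates.sort(key=lambda t: -t[1])
        expansion += [w for w, _ in candidates[:k]]
    return query_terms + expansion

expanded = expand_query(["gene"])
```

Because similarity is measured in the learned semantic space rather than via an external resource such as Wikipedia or WordNet, the expansion terms stay semantically tied to the original query.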

16.
A latent variable regression algorithm with a regularization term (rLVR) is proposed in this paper to extract latent relations between process data X and quality data Y. In rLVR, the prediction error between X and Y is minimized, which is proved to be equivalent to maximizing the projection of quality variables in the latent space. The geometric properties and model relations of rLVR are analyzed, and the geometric and theoretical relations among rLVR, partial least squares, and canonical correlation analysis are also presented. The rLVR-based monitoring framework is developed to monitor process-relevant and quality-relevant variations simultaneously. The prediction and monitoring effectiveness of the rLVR algorithm is demonstrated through both numerical simulations and the Tennessee Eastman (TE) process.

17.
18.
Dropout and other feature noising schemes have shown promise in controlling over-fitting by artificially corrupting the training data. Though extensive studies have been performed for generalized linear models, little has been done for support vector machines (SVMs), one of the most successful approaches for supervised learning. This paper presents dropout training for both linear SVMs and the nonlinear extension with latent representation learning. For linear SVMs, to deal with the intractable expectation of the non-smooth hinge loss under corrupting distributions, we develop an iteratively re-weighted least square (IRLS) algorithm by exploring data augmentation techniques. Our algorithm iteratively minimizes the expectation of a reweighted least square problem, where the re-weights are analytically updated. For nonlinear latent SVMs, we consider learning one layer of latent representations in SVMs and extend the data augmentation technique in conjunction with first-order Taylor-expansion to deal with the intractable expected hinge loss and the nonlinearity of latent representations. Finally, we apply the similar data augmentation ideas to develop a new IRLS algorithm for the expected logistic loss under corrupting distributions, and we further develop a non-linear extension of logistic regression by incorporating one layer of latent representations. Our algorithms offer insights on the connection and difference between the hinge loss and logistic loss in dropout training. Empirical results on several real datasets demonstrate the effectiveness of dropout training on significantly boosting the classification accuracy of both linear and nonlinear SVMs.
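The quantity at the heart of this paper, the expected hinge loss under a feature-corrupting distribution, can be illustrated with a plain Monte Carlo estimate. This is not the paper's IRLS algorithm (which handles the expectation analytically via data augmentation); the data and weights below are toy values:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_hinge_loss(w, X, y, drop_p=0.5, n_samples=2000):
    # Monte Carlo estimate of E[hinge loss] under feature dropout:
    # each feature is zeroed with prob. drop_p and rescaled by 1/(1-drop_p)
    total = 0.0
    for _ in range(n_samples):
        mask = rng.random(X.shape) >= drop_p
        Xc = X * mask / (1.0 - drop_p)
        margins = y * (Xc @ w)
        total += np.maximum(0.0, 1.0 - margins).mean()
    return total / n_samples

# Toy linearly separable data where the clean hinge loss is zero
X = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = np.array([1.0, 1.0])
loss = expected_hinge_loss(w, X, y)
```

Even though `w` classifies the clean data with zero hinge loss, the expected loss under corruption is strictly positive, which is exactly the regularizing pressure dropout training exploits.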

19.
Functional dependencies in relational databases are investigated. Eight binary relations, viz., (1) dependency relation, (2) equipotence relation, (3) dissidence relation, (4) completion relation, and dual relations of each of them are described. Any one of these eight relations can be used to represent the functional dependencies in a database. Results from linear graph theory are found helpful in obtaining these representations. The dependency relation directly gives the functional dependencies. The equipotence relation specifies the dependencies in terms of attribute sets which functionally determine each other. The dissidence relation specifies the dependencies in terms of saturated sets in a very indirect way. Completion relation represents the functional dependencies as a function, the range of which turns out to be a lattice. Depletion relation which is the dual of the completion relation can also represent functional dependencies and similarly can the duals of dependency, equipotence, and dissidence relations. The class of depleted sets, which is the dual of saturated sets, is defined and used in the study of depletion relations.
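The dependency relation "directly gives the functional dependencies"; the standard computational tool behind such representations is the attribute-closure algorithm, sketched below on a made-up set of FDs (attributes are single characters for brevity):

```python
def closure(attrs, fds):
    # Closure of an attribute set under a list of FDs (lhs -> rhs):
    # all attributes functionally determined by `attrs`.
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # Fire the FD if its left side is covered and it adds something new
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

# Example FD set: A -> B, B -> C, CD -> E
fds = [("A", "B"), ("B", "C"), ("CD", "E")]
```

For instance, `closure("AD", fds)` fires all three FDs in turn, while `closure("D", fds)` fires none; equipotence between two attribute sets then amounts to each lying in the other's closure.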

20.
Roth D  Yang MH  Ahuja N 《Neural computation》2002,14(5):1071-1103
A learning account for the problem of object recognition is developed within the probably approximately correct (PAC) model of learnability. The key assumption underlying this work is that objects can be recognized (or discriminated) using simple representations in terms of syntactically simple relations over the raw image. Although the potential number of these simple relations could be huge, only a few of them are actually present in each observed image, and a fairly small number of those observed are relevant to discriminating an object. We show that these properties can be exploited to yield an efficient learning approach in terms of sample and computational complexity within the PAC model. No assumptions are needed on the distribution of the observed objects, and the learning performance is quantified relative to its experience. Most important, the success of learning an object representation is naturally tied to the ability to represent it as a function of some intermediate representations extracted from the image. We evaluate this approach in a large-scale experimental study in which the SNoW learning architecture is used to learn representations for the 100 objects in the Columbia Object Image Library. Experimental results exhibit good generalization and robustness properties of the SNoW-based method relative to other approaches. SNoW's recognition rate degrades more gracefully when the training data contains fewer views, and it shows similar behavior in some preliminary experiments with partially occluded objects.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号