Similar Literature
20 similar documents found
1.
Text classification is an important research problem in natural language processing. In recent years, graph neural network models (Graph Neural Network, GNN) have received wide attention for their outstanding ability to handle complex network structures and have been introduced into text classification tasks. In previous studies, classification models based on graph convolutional networks (Graph Convolutional Neural Network, GCN) use...

2.
Existing GCN-based text classification models usually update node representations by simply fusing neighborhood information of different orders through the adjacency matrix, so the lexical semantics of each node are not expressed sufficiently. Moreover, models built on conventional attention mechanisms only assign positive weights to word vectors and ignore the influence of words that contribute negatively to the final classification. To address these problems, this paper proposes a model based on a bidirectional attention mechanism and a gated graph convolutional network. The model first uses a gated GCN to selectively fuse multi-order neighborhood information of graph nodes while retaining the information of earlier orders, thereby enriching node feature representations. It then learns the influence of different words on the classification result through a bidirectional attention mechanism: words that contribute positively receive positive weights, while words that contribute negatively receive negative weights that weaken their effect in the vector representation, improving the model's ability to distinguish nodes of different natures in a document. Finally, max pooling and mean pooling are combined over the word vectors to obtain the document representation used for classification. Experiments on four benchmark datasets show that the method clearly outperforms the baseline models. A sketch of the signed-attention pooling idea follows below.
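A minimal sketch (PyTorch; the layer names, dimensions, and tanh scoring are illustrative assumptions, not the authors' implementation) of the bidirectional attention pooling idea: a scalar score per word passed through tanh yields signed weights, so harmful words can be down-weighted with negative values, and the weighted word vectors are fused by max and mean pooling into a document vector.

```python
import torch
import torch.nn as nn

class BidirectionalAttentionPooling(nn.Module):
    """Signed attention over word vectors followed by max + mean pooling.

    Words that help the classification can receive positive weights,
    while harmful words can receive negative weights (tanh scores in [-1, 1]).
    This is a sketch of the idea, not the paper's exact architecture.
    """
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)             # one scalar score per word

    def forward(self, word_vecs: torch.Tensor) -> torch.Tensor:
        # word_vecs: (batch, num_words, hidden_dim)
        weights = torch.tanh(self.score(word_vecs))       # signed weights in [-1, 1]
        weighted = weights * word_vecs                    # (batch, num_words, hidden_dim)
        doc_max, _ = weighted.max(dim=1)                  # max pooling over words
        doc_mean = weighted.mean(dim=1)                   # mean pooling over words
        return torch.cat([doc_max, doc_mean], dim=-1)     # (batch, 2 * hidden_dim)

# toy usage: 2 documents, 5 words each, 8-dimensional word states
pool = BidirectionalAttentionPooling(hidden_dim=8)
docs = torch.randn(2, 5, 8)
print(pool(docs).shape)   # torch.Size([2, 16])
```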

3.
Graph convolutional neural networks have received wide attention in text classification but suffer from over-smoothing. In addition, existing work applies the mask mechanism at the level of the text sequence itself, which may not be entirely suitable for GCN-based text classification. This paper therefore proposes MaskGCN, a graph convolutional network that integrates a mask mechanism directly into the text graph structure and uses a globally shared matrix to dynamically build multi-granularity text-level graphs. Experiments on the THUCNews, 今日头条 (Toutiao) and SougoCS datasets show that the model effectively suppresses over-smoothing while achieving better results than other text classification models.

4.
In recent years, graph neural network models have been widely applied to text classification because they can model non-Euclidean data and capture global dependencies. However, the graph construction methods in existing GCN-based classification models consume too much memory and cannot easily accommodate new documents. Moreover, the methods used in existing work to describe global dependencies between graph nodes are not entirely suitable for classification tasks. To address these problems, this paper designs a text classification network based on probability distributions: a label-word heterogeneous graph is built with the words and labels of the corpus as nodes, the probability distribution of each word over the labels is used to describe global dependencies between nodes, and text representations are learned through graph convolution. Experiments on five public text classification datasets show that, while effectively reducing the graph size, the proposed model achieves results competitive with other text classification networks. A sketch of the probability-distribution edge weighting follows below.
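A minimal sketch (plain Python; the function and variable names are hypothetical, not the paper's released code) of the edge-weighting idea: estimate each word's probability distribution over the class labels from a labeled corpus and use those probabilities as label-word edge weights.

```python
from collections import Counter, defaultdict

def word_label_distribution(docs, labels):
    """Return P(label | word) estimated from a labeled corpus.

    docs   : list of token lists
    labels : list of class labels, one per document
    """
    counts = defaultdict(Counter)          # word -> Counter over labels
    for tokens, label in zip(docs, labels):
        for word in set(tokens):           # count each word once per document
            counts[word][label] += 1
    dist = {}
    for word, label_counts in counts.items():
        total = sum(label_counts.values())
        dist[word] = {lab: c / total for lab, c in label_counts.items()}
    return dist

# toy corpus: word-label edge weights for a 2-class problem
docs = [["good", "movie"], ["bad", "movie"], ["good", "plot"]]
labels = ["pos", "neg", "pos"]
print(word_label_distribution(docs, labels)["movie"])   # {'pos': 0.5, 'neg': 0.5}
```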

5.
The graph convolutional network GCN has been widely applied to text classification, but it builds the text graph only from word co-occurrence, ignoring regularities of the language itself such as semantic and syntactic relations, and it is not good at extracting contextual and sequential features of text. To address these problems, this paper proposes the text classification model SEB-GCN, which adds a syntactic text graph and a semantic text graph on top of the word co-occurrence graph and introduces ERNIE together with a residual two-layer BiGRU network to learn text features more deeply and improve classification. Experimental results on four news datasets show that, compared with the other models, SEB-GCN improves classification accuracy by 4.77%, 4.4%, 4.8%, 3.4% and 3% respectively, and also converges noticeably faster.

6.
To address the weaknesses of graph-embedding-based text classification in predictive performance and inductive capability, this work makes targeted improvements on top of the text graph convolutional network (TextGCN). Drawing on the efficient training and inductive nature of predictive text embedding (PTE), different graphs are used in different network layers; feature embeddings are learned through a heterogeneous graph convolutional network architecture, and the learned features are used for inductive inference. Experimental results show that with a large number of labeled training samples the proposed method performs on par with or slightly better than other methods, and with few labeled samples it performs better, with performance gains of 2%-7%, while also supporting faster training and better generalization.

7.
To address the scarcity of labeled data in text classification, this paper proposes a semi-supervised text classification method that combines word co-occurrence with graph convolution. The model collects word co-occurrence statistics from the corpus, filters them, and builds a large text graph containing both word nodes and document nodes; the adjacency matrix of the text graph and the node feature matrix are then fed into a graph convolutional network with an attention mechanism to classify the documents. Experimental results show that the method outperforms several current text classification algorithms on the classic 20NG, Ohsumed and MR datasets. A sketch of the graph-construction step follows below.
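A minimal sketch (plain Python; the window size, filtering threshold, and edge weighting are illustrative assumptions, not the paper's settings) of the graph-construction step: count word co-occurrences in a sliding window, filter weak pairs, and collect document-word edges so that both word nodes and document nodes appear in one graph.

```python
from collections import Counter
from itertools import combinations

def build_text_graph(docs, window=3, min_cooccur=2):
    """Build edges for a graph with word nodes and document nodes.

    Word-word edges come from sliding-window co-occurrence counts
    (pairs below `min_cooccur` are filtered out); doc-word edges
    connect each document to the words it contains.
    """
    cooccur = Counter()
    for tokens in docs:
        for start in range(max(1, len(tokens) - window + 1)):
            for w1, w2 in combinations(sorted(set(tokens[start:start + window])), 2):
                cooccur[(w1, w2)] += 1

    word_word = {pair: c for pair, c in cooccur.items() if c >= min_cooccur}
    doc_word = {(f"doc{i}", w): tokens.count(w)
                for i, tokens in enumerate(docs) for w in set(tokens)}
    return word_word, doc_word

docs = [["graph", "text", "classification"],
        ["graph", "text", "network"],
        ["text", "classification", "graph"]]
ww, dw = build_text_graph(docs)
print(ww)   # e.g. {('graph', 'text'): 3, ('classification', 'graph'): 2, ...}
```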

8.
9.
With the development of graph convolutional networks, GCNs have been applied to many tasks, including text classification. By representing text as graph data and applying graph convolution on the graph, such models capture the structural information of the text and long-range dependencies between words, achieving good classification results. However, once text is modeled as a graph, graph convolutional networks face the problem that the contextual semantics and local features of the text are not represented sufficiently. This work proposes a new model that uses a bidirectional long short-term...

10.
Because short texts are short, their classification faces problems such as data sparsity and semantic ambiguity. This work proposes a new graph convolutional network, BTM_GCN, which uses the Biterm Topic Model (BTM) to train a fixed number of document-level latent topics on the short-text dataset, embeds them as nodes into the heterogeneous text graph and connects them to the document nodes, and finally uses graph convolution to capture document, word and topic node...

11.
Text sentiment classification mines and analyzes subjective information in text, such as viewpoints, opinions and attitudes, and judges the sentiment polarity of the text. This work proposes a text sentiment analysis method based on an ensemble of sentiment member models: three sentiment classification models, one based on an improved neural network, one based on semantic features, and one based on conditional random fields, are ensembled together. The ensembled model covers different sentiment features and thus overcomes the limitation of traditional ensemble learning, which focuses only on the outputs of the member models. Experiments on public corpora show that the ensemble combines the strengths of the member models and reaches a classification accuracy of 88.2%, far higher than any single member model.

12.
Network traffic classification based on ensemble learning and co-training
Classification of network traffic is an essential step for much network research. However, with the rapid evolution of Internet applications, the effectiveness of port-based or payload-based identification approaches has greatly diminished in recent years, and many researchers have begun to turn their attention to alternative machine-learning-based methods. This paper presents a novel machine-learning-based classification model which combines the ensemble learning paradigm with co-training techniques. Compared to previous approaches, most of which employed only a single classifier, our method applies multiple classifiers and semi-supervised learning, which mainly helps to overcome three shortcomings: limited flow accuracy, weak adaptability and a huge demand for labeled training data. In this paper, statistical characteristics of IP flows are extracted from packet-level traces to establish the feature set; the classification model is then created and tested, and the empirical results prove its feasibility and effectiveness. A sketch of the co-training loop follows below.
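A minimal sketch (scikit-learn on synthetic data; the two feature views, base classifiers, and confidence threshold are illustrative assumptions, not the paper's setup) of the co-training idea: two classifiers trained on different feature views each pseudo-label the unlabeled flows they are most confident about, and those pseudo-labels are added to the other view's training set.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# synthetic "flow" data: 6 statistical features per flow, 2 traffic classes
X = rng.normal(size=(200, 6)) + rng.integers(0, 2, 200)[:, None]
y = (X.mean(axis=1) > 0.5).astype(int)

views = [slice(0, 3), slice(3, 6)]            # two feature views of each flow
labeled = np.arange(20)                       # only 20 flows start out labeled
pseudo_y = np.full(len(y), -1)
pseudo_y[labeled] = y[labeled]
train_idx = [set(labeled), set(labeled)]
clfs = [GaussianNB(), GaussianNB()]

for _ in range(5):                            # a few co-training rounds
    for i, view in enumerate(views):
        idx = np.fromiter(train_idx[i], dtype=int)
        clfs[i].fit(X[idx, view], pseudo_y[idx])
        unlabeled = np.where(pseudo_y == -1)[0]
        if len(unlabeled) == 0:
            break
        # classifier i pseudo-labels the unlabeled flows it is most confident about
        proba = clfs[i].predict_proba(X[unlabeled, view])
        confident = unlabeled[proba.max(axis=1) > 0.95]
        if len(confident):
            pseudo_y[confident] = clfs[i].predict(X[confident, view])
            train_idx[1 - i] |= set(confident)    # feed the other view's training set

acc = (clfs[0].predict(X[:, views[0]]) == y).mean()
print(f"view-A classifier accuracy: {acc:.2f}")
```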

13.
To address the weak generalization ability of existing network representation learning methods, this work applies the stacking ensemble idea to network representation learning in order to improve representation quality. First, three classic shallow network representation learning methods, DeepWalk, Node2Vec and LINE, serve as parallel base learners, and the three sets of node embeddings they produce are concatenated to form a new dataset. Then a graph convolutional network (GCN) is chosen as the meta learner, stacking the new dataset with the network structure to obtain the final node embeddings; GCN handles semi-supervised classification well, but since network representation learning is unsupervised, the loss function is designed from the first-order proximity of the network. Finally, evaluation metrics are designed to assess the base learners and the ensembled node embeddings separately. Experiments show that the GCN-based ensemble works well, with the evaluation metrics improving by a factor of 1.47 to 2.97 on average. A sketch of the stacking step follows below.
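A minimal sketch (NumPy; random embeddings stand in for the DeepWalk/Node2Vec/LINE outputs, and the weight matrix is untrained) of the stacking step: concatenate the base learners' node embeddings into one feature matrix and pass it through a single symmetric-normalized GCN propagation.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One GCN propagation step: ReLU(D^{-1/2} (A + I) D^{-1/2} X W)."""
    a_hat = adj + np.eye(adj.shape[0])                  # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weight, 0)

rng = np.random.default_rng(0)
n_nodes = 6

# stand-ins for the three base learners' 16-dimensional node embeddings
deepwalk_emb = rng.normal(size=(n_nodes, 16))
node2vec_emb = rng.normal(size=(n_nodes, 16))
line_emb = rng.normal(size=(n_nodes, 16))
features = np.concatenate([deepwalk_emb, node2vec_emb, line_emb], axis=1)  # (6, 48)

# a small ring graph as the network structure
adj = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    adj[i, (i + 1) % n_nodes] = adj[(i + 1) % n_nodes, i] = 1

weight = rng.normal(size=(48, 8))                   # untrained projection to 8 dims
print(gcn_layer(adj, features, weight).shape)       # (6, 8)
```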

14.
Consider a supervised learning problem in which examples contain both numerical- and text-valued features. To use traditional feature-vector-based learning methods, one could treat the presence or absence of a word as a Boolean feature and use these binary-valued features together with the numerical features. However, using a text-classification system on such data is a bit more problematic: in the most straightforward approach each number would be considered a distinct token and treated as a word. This paper presents an alternative approach for applying text classification methods to supervised learning problems with numerical-valued features, in which the numerical features are converted into bag-of-words features, thereby making them directly usable by text classification methods. We show that even on purely numerical data the results of text classification on the derived text-like representation outperform the more naive numbers-as-tokens representation and, more importantly, are competitive with mature numerical classification methods such as C4.5, Ripper, and SVM. We further show that on mixed-mode data, adding numerical features using our approach can improve performance over not adding those features. A sketch of the numbers-to-tokens conversion follows below.
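A minimal sketch (plain Python; the bin boundaries and token format are illustrative assumptions, not the paper's exact discretization scheme) of converting numerical features into bag-of-words tokens so that a text classifier can consume them alongside real words.

```python
def number_to_token(name, value, boundaries):
    """Turn a numerical feature into a pseudo-word naming its bin.

    A value is mapped to a token such as 'age_bin2' depending on which
    interval of `boundaries` it falls into, so text classifiers can treat
    it like any other word.
    """
    for i, bound in enumerate(boundaries):
        if value <= bound:
            return f"{name}_bin{i}"
    return f"{name}_bin{len(boundaries)}"

def example_to_bow(text_tokens, numeric_features, schema):
    """Merge text tokens with tokenized numeric features into one bag of words."""
    bow = list(text_tokens)
    for name, value in numeric_features.items():
        bow.append(number_to_token(name, value, schema[name]))
    return bow

schema = {"age": [18, 35, 60], "income": [30_000, 80_000]}
print(example_to_bow(["loves", "hiking"], {"age": 42, "income": 25_000}, schema))
# ['loves', 'hiking', 'age_bin2', 'income_bin0']
```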

15.
Despite significant successes in knowledge discovery, traditional machine learning methods may fail to obtain satisfactory performance when dealing with complex data, such as imbalanced, high-dimensional, or noisy data. The reason is that it is difficult for these methods to capture multiple characteristics and the underlying structure of the data. In this context, how to effectively construct an efficient knowledge discovery and mining model has become an important topic in the data mining field. Ensemble learning, as one research hot spot, aims to integrate data fusion, data modeling, and data mining into a unified framework. Specifically, ensemble learning first extracts a set of features with a variety of transformations. Based on these learned features, multiple learning algorithms are used to produce weak predictive results. Finally, ensemble learning fuses the informative knowledge from these results to achieve knowledge discovery and better predictive performance via voting schemes in an adaptive way. In this paper, we review the research progress of the mainstream approaches to ensemble learning and classify them based on their characteristics. In addition, we present challenges and possible research directions for each mainstream approach, and we also introduce the combination of ensemble learning with other machine learning hot spots such as deep learning and reinforcement learning. A sketch of the voting scheme follows below.
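A minimal sketch (scikit-learn on synthetic data; the choice of base estimators is an illustrative assumption) of the voting scheme described above: several heterogeneous learners are fused by soft voting into one ensemble prediction.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# a small synthetic classification problem standing in for "complex data"
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("nb", GaussianNB()),
        ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
    ],
    voting="soft",          # fuse predicted probabilities rather than hard labels
)
ensemble.fit(X_train, y_train)
print("ensemble accuracy:", ensemble.score(X_test, y_test))
```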

16.
This work proposes a multi-label short-text classification algorithm that combines a similarity graph with a random walk model. First, a similarity graph is built with the samples and labels as nodes, and an external knowledge base is used to compute the weights between samples and labels, yielding matching degrees between the samples to be predicted and the label set. Then the multi-label data are mapped into a multi-label dependency graph, on which a random walk with restart is performed; the matching degrees obtained earlier serve as initial prediction values, and the probability distribution of each node is computed until the distribution converges, ... A sketch of the random walk with restart follows below.
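A minimal sketch (NumPy; the toy graph and restart probability are illustrative assumptions) of the random walk with restart used above: the walk repeatedly spreads probability over the column-normalized dependency graph and, with restart probability alpha, jumps back to the initial prediction vector, iterating until the distribution converges.

```python
import numpy as np

def random_walk_with_restart(adj, p0, alpha=0.15, tol=1e-8, max_iter=1000):
    """Iterate p <- (1 - alpha) * W @ p + alpha * p0 until convergence.

    adj : adjacency matrix of the label dependency graph
    p0  : initial prediction (restart) vector, should sum to 1
    """
    w = adj / adj.sum(axis=0, keepdims=True)        # column-normalize
    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1 - alpha) * w @ p + alpha * p0
        if np.abs(p_next - p).sum() < tol:          # converged
            return p_next
        p = p_next
    return p

# toy label dependency graph over 4 labels
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
p0 = np.array([0.7, 0.1, 0.1, 0.1])                # initial matching degrees
print(random_walk_with_restart(adj, p0).round(3))
```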

17.
Text categorization is one of the most common themes in data mining and machine learning. Unlike structured data, unstructured text is more difficult to analyze because it contains complicated syntactic and semantic information. In this paper, we propose a two-level representation model (2RM) for text data: one level represents syntactic information and the other semantic information. At the syntactic level, each document is represented as a term vector whose component values are term frequency and inverse document frequency. The Wikipedia concepts related to the terms at the syntactic level are used to represent the document at the semantic level. Meanwhile, we design a multi-layer classification framework (MLCLA) to make use of the semantic and syntactic information represented in the 2RM model. The MLCLA framework contains three classifiers: two are applied in parallel at the syntactic and semantic levels, and their outputs are combined and fed to the third classifier to obtain the final results. Experimental results on benchmark datasets (20Newsgroups, Reuters-21578 and Classic3) show that the proposed 2RM model plus the MLCLA framework improves text classification performance compared with existing flat text representation models (term-based VSM, Term Semantic Kernel Model, concept-based VSM, Concept Semantic Kernel Model and Term + Concept VSM) combined with existing classification methods. A sketch of the multi-layer framework follows below.
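A minimal sketch (scikit-learn with synthetic term and concept features; the base and final estimators are illustrative assumptions, not the paper's MLCLA implementation) of the multi-layer idea: two classifiers run in parallel on the syntactic-level and semantic-level representations, and their predicted probabilities are concatenated as input to a third classifier.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# synthetic stand-ins: 50 "term" (syntactic) features and 20 "concept" (semantic) features
X, y = make_classification(n_samples=600, n_features=70, n_informative=10, random_state=0)
X_syn, X_sem = X[:, :50], X[:, 50:]
idx_train, idx_test = train_test_split(np.arange(len(y)), random_state=0)

syn_clf = LogisticRegression(max_iter=1000).fit(X_syn[idx_train], y[idx_train])
sem_clf = LogisticRegression(max_iter=1000).fit(X_sem[idx_train], y[idx_train])

# level-1 outputs (class probabilities) become level-2 features
def level2_features(idx):
    return np.hstack([syn_clf.predict_proba(X_syn[idx]),
                      sem_clf.predict_proba(X_sem[idx])])

final_clf = LogisticRegression(max_iter=1000).fit(level2_features(idx_train), y[idx_train])
print("final accuracy:", final_clf.score(level2_features(idx_test), y[idx_test]))
```

In a production setting the level-1 probabilities would be generated with cross-validation rather than on the same training split, to avoid leaking the training labels into the final classifier.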

18.
Traditional approaches to text data stream classification usually require the manual labeling of a number of documents, which is an expensive and time-consuming process. In this paper, to overcome this limitation, we propose to classify text streams by keywords without labeled documents, so as to reduce the burden of manual labeling. We build our base text classifiers with the help of keywords and unlabeled documents, and use classifier ensemble algorithms to cope with concept drift in text data streams. Experimental results demonstrate that the proposed method can build good classifiers from keywords without manual labeling, and that when the ensemble-based algorithm is used, concept drift in the streams is well detected and adapted to, outperforming the single-window algorithm.

19.
Traditional text classification algorithms no longer meet the needs of texts with special structure, so this work proposes a text classification algorithm based on the multi-instance learning framework. Each text is treated as a bag of instances, with its title and body as the two instances of the bag; a one-class-based multi-class support vector machine algorithm maps the bags into a high-dimensional feature space; and a Gaussian kernel is introduced to train the classifier and predict the classes of unlabeled texts. Experimental results show that the algorithm achieves higher classification accuracy than traditional machine learning classification algorithms and offers a new angle for text mining research on texts with special structure.

20.
Title classification assigns a class to a title sentence, usually a short text of no more than 20 characters that is highly condensed and general. To address the sparse features and uncertain meaning of title text, this work proposes a title classification algorithm that combines random forests with a Bayesian multinomial model. The algorithm introduces the Bayesian multinomial model into the construction of the random forest's base classifiers, and uses the out-of-bag (OOB) data of the random forest to design a voting mechanism based on a two-dimensional weight distribution. Experiments on real library bibliographic data compare classification performance against the current SVM algorithm based on LDA topic expansion; the results show that under certain conditions the method is stable and performs well.
