Similar Documents
19 similar documents retrieved (search time: 84 ms)
1.
A Study of Chinese Short Text Classification Methods  (Cited 1 time: 0 self-citations, 1 external)
Unlike traditional word-based approaches to automatic Chinese short text classification, this method takes the training data as a background corpus and applies an association-rule mining algorithm to discover co-occurrence relations among terms in the training texts, building a feature co-occurrence set that serves as an expansion vocabulary. This set is then used to expand the features of both the training and test texts before building the short-text classification model. Experiments show that both improved methods give the short-text classification system high precision.
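The idea described above — mining term co-occurrence from the training corpus and using it to expand sparse short texts — can be sketched in a few lines. This is a minimal illustration rather than the paper's implementation: the toy corpus, the simple pairwise support count (standing in for a full association-rule miner), and the `min_support` threshold are all assumptions.

```python
from collections import Counter
from itertools import combinations

def mine_cooccurrence(docs, min_support=2):
    """Collect term pairs that co-occur in at least `min_support` documents."""
    pair_counts = Counter()
    for doc in docs:
        for pair in combinations(sorted(set(doc)), 2):
            pair_counts[pair] += 1
    return {pair for pair, c in pair_counts.items() if c >= min_support}

def expand(doc, cooc):
    """Expand a short text with terms that frequently co-occur with its terms."""
    extra = set()
    for a, b in cooc:
        if a in doc:
            extra.add(b)
        if b in doc:
            extra.add(a)
    return list(doc) + sorted(extra - set(doc))

train = [["price", "stock", "market"],
         ["stock", "market", "fund"],
         ["game", "player", "score"],
         ["game", "score", "team"]]
cooc = mine_cooccurrence(train)
expanded = expand(["stock"], cooc)   # the sparse one-word text gains "market"
```

Expanding both training and test texts with the same co-occurrence set, as the abstract describes, gives the classifier denser and more consistent features.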

2.
With the rapid development of information technology, massive amounts of Chinese short-text data are generated on the Internet. Mining valuable information from such low-information data with Chinese short text classification techniques is a current research hotspot. Compared with long Chinese texts, short texts contain few characters, more ambiguity, and irregular wording, which makes their text features hard to extract and represent. This paper therefore proposes a Chinese short text classification algorithm based on a deep neural network with hybrid character-word features. First, the algorithm...

3.
Short texts usually consist of a few to a few dozen words; their short length and sparse features make classification accuracy hard to improve. To address this, a Chinese short text classification algorithm that fuses local semantic features with contextual relations, called Bi-LSTM_CNN_AT, is proposed. It uses a CNN to extract local semantic features of the text and a Bi-LSTM to extract contextual semantic features, combined with an attention mechanism so that the model can pick out the features most relevant to the task at hand. Experimental results show that on the 18-class NLP&CC2017 news-headline dataset, Bi-LSTM_CNN_AT reaches 81.31% classification accuracy, 2.02 percentage points above a single-channel CNN model and 1.77 points above a single-channel Bi-LSTM model.
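The CNN branch of such a model extracts local n-gram features by sliding a kernel over token embeddings. Below is a minimal, dependency-free sketch of that one step (a single channel with ReLU and max-over-time pooling); the toy embeddings and kernel values are made up for illustration and are not from the paper.

```python
def conv1d(embeddings, kernel, bias=0.0):
    """Slide a window over token embeddings, producing one local feature per
    position: a linear map over the window followed by ReLU, as one CNN channel."""
    k = len(kernel)        # window size in tokens
    dim = len(kernel[0])   # embedding dimension
    out = []
    for i in range(len(embeddings) - k + 1):
        s = bias
        for j in range(k):
            for d in range(dim):
                s += kernel[j][d] * embeddings[i + j][d]
        out.append(max(0.0, s))  # ReLU
    return out

# toy 2-d embeddings for a 4-token sentence, one kernel with window size 2
emb = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]
kernel = [[1.0, 0.0], [0.0, 1.0]]
features = conv1d(emb, kernel)
pooled = max(features)   # max-over-time pooling keeps the strongest local feature
```

In the full model these pooled CNN features are concatenated with Bi-LSTM context features and reweighted by attention before the final classifier.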

4.
Existing Chinese short text classification algorithms typically face feature sparsity, irregular wording, and massive data volumes. To address this, a Chinese short text classification algorithm based on BERT (Bidirectional Encoder Representations from Transformers) is proposed: the pretrained BERT language model produces sentence-level feature vector representations of the short texts, which are fed into a softmax regression model for training and classification. Experimental results show that as the volume of Sohu news text grows, the algorithm's overall F1 score on the test set reaches up to 93%, 6 percentage points above a TextCNN-based short text classifier, indicating that it effectively represents sentence-level semantic information and yields better Chinese short text classification.
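The classification head in this pipeline is plain softmax regression over sentence vectors. As a self-contained sketch (toy 3-d vectors standing in for pooled BERT outputs, which are really 768-d; the learning rate and epoch count are arbitrary assumptions), one SGD training loop looks like this:

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def sgd_step(W, b, x, y, lr=0.5):
    """One gradient step of multinomial logistic (softmax) regression."""
    logits = [sum(wd * xd for wd, xd in zip(w, x)) + bi for w, bi in zip(W, b)]
    p = softmax(logits)
    for c in range(len(W)):
        err = p[c] - (1.0 if c == y else 0.0)   # gradient of cross-entropy
        for d in range(len(x)):
            W[c][d] -= lr * err * x[d]
        b[c] -= lr * err
    return p

# two classes, 3-d "sentence vectors" (toy stand-ins for BERT outputs)
W = [[0.0] * 3, [0.0] * 3]
b = [0.0, 0.0]
data = [([1.0, 0.0, 1.0], 0), ([0.0, 1.0, 0.0], 1)]
for _ in range(50):
    for x, y in data:
        sgd_step(W, b, x, y)
probs = softmax([sum(wd * xd for wd, xd in zip(w, [1.0, 0.0, 1.0])) + bi
                 for w, bi in zip(W, b)])
```

After training, `probs` puts nearly all mass on class 0 for the first training vector, which is the behavior the softmax layer contributes on top of BERT's features.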

5.
A Feature-Expansion-Based Method for Chinese Short Text Classification  (Cited 2 times: 2 self-citations, 0 external)
Given the weak signal carried by short texts, a feature-expansion-based Chinese short text classification method is proposed. The method first uses the FP-Growth algorithm to mine co-occurrence relations between feature terms of the training set and the test set, then uses the resulting association rules to expand the concept words in short test documents. Semantic information is also introduced, and the expressive-power formula for DEF entries in HowNet is improved; Chinese short texts are then classified on this basis. Experiments show the method achieves high classification performance, with both micro-averaged and macro-averaged scores above those of conventional text classification methods.

6.
KNN short text classification algorithms improve accuracy by expanding the short text's content, but at the cost of classification efficiency. This work therefore uses the chi-square statistic to extract class features for each category in the training space and, according to how similar each category's samples are to those class features, splits the existing training space so that each category is refined into several training subsets each holding part of the samples. For a test text, the samples of the training subsets whose class features are most similar to the test text are drawn from the refined training space to reconstruct its training set, reducing the number of text-pair comparisons the KNN classifier must make and thus improving its efficiency. Experiments show that, compared with a HowNet-semantics-based KNN short text classifier, the algorithm improves KNN efficiency by nearly 50%, with some gain in classification accuracy as well.
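The class-feature extraction step above relies on the standard chi-square statistic for a term/class pair. A minimal sketch of that computation follows; the four-document toy corpus is an assumption for illustration.

```python
def chi_square(term, cls, docs):
    """Chi-square score of a term w.r.t. a class over (tokens, label) documents,
    using the usual 2x2 contingency counts A, B, C, D."""
    N = len(docs)
    A = sum(1 for toks, lab in docs if term in toks and lab == cls)
    B = sum(1 for toks, lab in docs if term in toks and lab != cls)
    C = sum(1 for toks, lab in docs if term not in toks and lab == cls)
    D = N - A - B - C
    denom = (A + C) * (B + D) * (A + B) * (C + D)
    return 0.0 if denom == 0 else N * (A * D - B * C) ** 2 / denom

docs = [({"goal", "match"}, "sports"), ({"match", "team"}, "sports"),
        ({"stock", "price"}, "finance"), ({"price", "fund"}, "finance")]
score_match = chi_square("match", "sports", docs)  # perfectly class-indicative
score_goal = chi_square("goal", "sports", docs)    # only partially indicative
```

Ranking terms by this score per category yields the class features used to partition the training space into smaller subsets.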

7.
Short-text data streams emerging from social network platforms such as Weibo and Facebook are massive, high-dimensional and sparse, and rapidly changing, which poses great challenges to short-text stream classification. Existing methods struggle to handle the high-dimensional sparsity and incur high time costs on massive streams. This work proposes a distributed, fast short-text stream classification method based on Spark. On one hand, a Word2vec word-vector model built from an external corpus resolves the high-dimensional sparsity of short texts, an expanded word-vector library accommodates their rapid change, and an ensemble of LR (logistic regression) classifiers is proposed for stream classification; the classifier updates its model parameters online with an FTRL method and introduces a time-factor weighting mechanism to adapt to concept drift. On the other hand, distributed processing improves throughput on massive short-text streams. Experiments on three real short-text streams show the method improves classification accuracy while reducing time consumption.
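The online-update component named above is FTRL. Below is a compact per-coordinate FTRL-Proximal update for logistic regression (following McMahan et al.'s published formulation, not this paper's exact variant); the toy two-feature stream and the hyperparameter values are assumptions.

```python
import math

class FTRL:
    """Per-coordinate FTRL-Proximal update for online logistic regression."""
    def __init__(self, dim, alpha=0.5, beta=1.0, l1=0.0, l2=0.0):
        self.alpha, self.beta, self.l1, self.l2 = alpha, beta, l1, l2
        self.z = [0.0] * dim   # accumulated (adjusted) gradients
        self.n = [0.0] * dim   # accumulated squared gradients

    def weights(self):
        w = []
        for zi, ni in zip(self.z, self.n):
            if abs(zi) <= self.l1:
                w.append(0.0)   # L1 sparsity: small coordinates stay exactly zero
            else:
                w.append(-(zi - math.copysign(self.l1, zi)) /
                         ((self.beta + math.sqrt(ni)) / self.alpha + self.l2))
        return w

    def update(self, x, y):
        w = self.weights()
        p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
        for i, xi in enumerate(x):
            g = (p - y) * xi
            sigma = (math.sqrt(self.n[i] + g * g) - math.sqrt(self.n[i])) / self.alpha
            self.z[i] += g - sigma * w[i]
            self.n[i] += g * g
        return p   # prediction made before this update

model = FTRL(dim=2)
stream = [([1.0, 0.0], 1), ([0.0, 1.0], 0)] * 200   # toy labeled stream
for x, y in stream:
    model.update(x, y)
p_pos = model.update([1.0, 0.0], 1)
```

Because the state is just the `z` and `n` vectors, each arriving short text costs O(features) to process, which is what makes FTRL suitable for the streaming setting the abstract describes.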

8.
Social network platforms generate massive short-text data streams characterized by high speed, huge volume, concept drift, very short texts, and largely missing class labels. This paper proposes a semi-supervised short-text stream classification algorithm based on vector representation and label propagation, which can effectively classify datasets containing only a small amount of labeled data. To adapt to concept drift, a cluster-based concept drift detection algorithm is also proposed. Experiments on real short-text streams show that, compared with semi-supervised classification algorithms and semi-supervised stream classification algorithms, the proposed algorithm not only improves accuracy and macro-average but also adapts quickly to concept drift in the stream.
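Label propagation, the core mechanism named above, spreads the few known labels over a similarity graph until unlabeled nodes settle on a class. A minimal sketch under assumed inputs (a hand-built 0/1 similarity matrix and two seed labels; real systems would build the graph from text-vector similarities):

```python
def propagate_labels(sim, labels, iters=20):
    """Iterative label propagation on a similarity graph.
    sim[i][j]: similarity between nodes; labels: node -> class for seed nodes."""
    n = len(sim)
    classes = sorted(set(labels.values()))
    # one-hot scores for labeled nodes, uniform scores for unlabeled ones
    F = [[1.0 if labels.get(i) == c else 0.0 for c in classes] if i in labels
         else [1.0 / len(classes)] * len(classes) for i in range(n)]
    for _ in range(iters):
        for i in range(n):
            if i in labels:
                continue  # clamp seed labels
            row = [sum(sim[i][j] * F[j][k] for j in range(n))
                   for k in range(len(classes))]
            s = sum(row) or 1.0
            F[i] = [v / s for v in row]
    return {i: classes[max(range(len(classes)), key=F[i].__getitem__)]
            for i in range(n)}

# nodes 0-2 form one similarity cluster, nodes 3-4 another; only 0 and 4 labeled
sim = [[0, 1, 1, 0, 0],
       [1, 0, 1, 0, 0],
       [1, 1, 0, 0, 0],
       [0, 0, 0, 0, 1],
       [0, 0, 0, 1, 0]]
pred = propagate_labels(sim, {0: "sports", 4: "finance"})
```

Two seed labels are enough here to label all five nodes, which is the appeal of the approach when class labels are largely missing.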

9.
Chinese short texts are short and strongly context-dependent. The currently mainstream word-vector-based bidirectional recurrent network classifiers depend on the semantic quality of the word vectors and on their contextual feature extraction, and their classification accuracy leaves room for improvement. To address this, the paper proposes a semantically enhanced Chinese short text classification method. At the word-representation stage, BERT is introduced to generate vectors fusing character, text, and position information as the word representations of the training text, enhancing its semantics; these are fed into a Bi-GRU network to extract contextual relation features; a multi-head attention mechanism then adjusts the weights to strengthen important feature expressions; finally, a softmax classifier performs the text classification. Comparative experiments against other mainstream methods show that the proposed method markedly improves short text classification.
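The attention step above reweights the Bi-GRU hidden states so that important tokens dominate the final representation. A single-head, dependency-free sketch (the toy states and query vector are assumptions; the real model uses multiple heads with learned projections):

```python
import math

def attention_pool(states, query):
    """Scaled dot-product attention over hidden states:
    softmax(q . h_t / sqrt(d)) weights, then a weighted sum of the states."""
    d = len(query)
    scores = [sum(q * h for q, h in zip(query, s)) / math.sqrt(d) for s in states]
    m = max(scores)
    e = [math.exp(v - m) for v in scores]
    z = sum(e)
    weights = [v / z for v in e]
    dim = len(states[0])
    pooled = [sum(weights[t] * states[t][k] for t in range(len(states)))
              for k in range(dim)]
    return weights, pooled

states = [[1.0, 0.0], [0.0, 1.0], [3.0, 0.0]]   # toy Bi-GRU outputs per token
query = [1.0, 0.0]                              # toy learned attention query
weights, pooled = attention_pool(states, query)
```

The token whose state best matches the query receives the largest weight, so `pooled` is dominated by the most task-relevant features, which is the "strengthen important features" effect the abstract describes.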

10.
张虎, 柏萍. 《计算机科学》 (Computer Science), 2022, 49(2): 279-284
With graph neural network techniques now widely applied in natural language processing, GNN-based text classification has drawn increasing attention. Graph construction is an important research task in applying GNNs to text classification, yet existing methods usually fail to capture dependencies between distant words in a sentence when building the graph. Short text classification is a special classification task in which the texts to be classified are generally short; traditional text representations are usually sparse and lack rich semantic information...
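The graph-construction step discussed above typically links words that co-occur within a sliding window, with edge weights counting co-occurrences. A minimal sketch of that construction (the toy sentence and window size are assumptions; the paper's method additionally targets long-distance dependencies, which this simple window scheme does not capture):

```python
from collections import defaultdict

def build_word_graph(tokens, window=3):
    """Edges between words co-occurring within a sliding window over the text;
    edge weight = number of co-occurrences."""
    edges = defaultdict(int)
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            if tokens[j] != w:
                edges[tuple(sorted((w, tokens[j])))] += 1
    return dict(edges)

graph = build_word_graph(["cheap", "flight", "ticket", "cheap", "hotel"], window=3)
```

The resulting weighted edges become the adjacency input to the graph neural network; the short-text limitation the abstract points to is exactly that such windows miss word pairs further apart than the window size.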

11.
Existing attempts to automate construction document analysis are limited in understanding the varied semantic properties of different documents. Due to the semantic conflicts, the construction specification review process is still conducted manually in practice despite the promising performance of the existing approaches. This research aimed to develop an automated system for reviewing construction specifications by analyzing the different semantic properties using natural language processing techniques. The proposed method analyzed varied semantic properties of 56 different specifications from five different countries in terms of vocabulary, sentence structure, and the organizing styles of provisions. First, the authors developed a semantic thesaurus for construction terms including 208 word-replacement rules based on Word2Vec embedding to understand the different vocabularies. Second, the authors developed a named entity recognition model based on bi-directional long short-term memory with a conditional random field layer, which identified the required keywords from given provisions with an averaged F1 score of 0.928. Third, the authors developed a provision-pairing model based on Doc2Vec embedding, which identified the most relevant provisions with an average accuracy of 84.4%. The web-based prototype demonstrated that the proposed system can facilitate the construction specification review process by reducing the time spent, supplementing the reviewer’s experience, enhancing accuracy, and achieving consistency. The results contribute to risk management in the construction industry, with practitioners being able to review construction specifications thoroughly in spite of tight schedules and few available experts.

12.
Today, construction planning and scheduling is almost always performed manually, by experienced practitioners. The knowledge of those individuals is materialized, maintained, and propagated through master schedules and look-ahead plans. While historical project schedules are available, manually mining their embedded knowledge to create generic work templates for future projects or revising look-ahead schedules is very difficult, time-consuming and error-prone. The rigid work templates from prior research are also not scalable to cover the inter and intra-class variability in historical schedule activities. This paper aims at fulfilling these needs via a new method to automatically learn construction knowledge from historical project planning and scheduling records and digitize such knowledge in a flexible and generalizable data schema. Specifically, we present Dynamic Process Templates (DPTs) based on a novel vector representation for construction activities where the sequencing knowledge is modeled with generative Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs). Our machine learning models are exhaustively tested and validated on a diverse dataset of 32 schedules obtained from real-world projects. The experimental results show our method is capable of learning planning and sequencing knowledge at high accuracy across different projects. The benefits for automated project planning and scheduling, schedule quality control, and automated generation of project look-aheads are discussed in detail.
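The paper models sequencing knowledge with generative LSTM-RNNs. As a deliberately simplified stand-in for intuition only, the same "learn activity orderings from historical schedules, then predict the next activity" idea can be shown with bigram transition counts; the activity names and toy schedules below are assumptions, not data from the paper.

```python
from collections import defaultdict

def fit_transitions(schedules):
    """Count activity -> successor transitions across historical schedules."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in schedules:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts, activity):
    """Most frequent successor of an activity (None if never seen as predecessor)."""
    succ = counts.get(activity)
    return max(succ, key=succ.get) if succ else None

history = [["excavation", "foundation", "framing", "roofing"],
           ["excavation", "foundation", "framing", "plumbing"],
           ["demolition", "excavation", "foundation"]]
model = fit_transitions(history)
nxt = predict_next(model, "foundation")
```

An LSTM generalizes this by conditioning on the whole activity history (and on the vector representation of each activity) rather than only the immediately preceding activity.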

13.
Instance selection aims at filtering out noisy data (or outliers) from a given training set, which not only reduces the need for storage space, but can also ensure that the classifier trained by the reduced set provides similar or better performance than the baseline classifier trained by the original set. However, since there are numerous instance selection algorithms, there is no concrete winner that is the best for various problem domain datasets. In other words, the instance selection performance is algorithm and dataset dependent. One main reason for this is that it is very hard to define what the outliers are over different datasets. It should be noted that, using a specific instance selection algorithm, over-selection may occur by filtering out too many ‘good’ data samples, which leads to the classifier providing worse performance than the baseline. In this paper, we introduce a dual classification (DuC) approach, which aims to deal with the potential drawback of over-selection. Specifically, after performing instance selection over a given training set, two classifiers are trained on the ‘good’ and ‘noisy’ sets respectively identified by the instance selection algorithm. Then, each test sample is compared for similarity against the data in the good and noisy sets, and this comparison guides the input of the test sample to one of the two classifiers. The experiments are conducted using 50 small scale and 4 large scale datasets and the results demonstrate the superior performance of the proposed DuC approach over the baseline instance selection approach.
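The routing decision at the heart of DuC — send each test sample to the classifier trained on whichever set it resembles — can be sketched with a nearest-centroid similarity test. This is an illustrative simplification (the paper compares against the sets' data; a centroid comparison and the toy 2-d points are assumptions):

```python
def centroid(rows):
    """Mean vector of a set of equal-length feature rows."""
    dim = len(rows[0])
    return [sum(r[d] for r in rows) / len(rows) for d in range(dim)]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def route(sample, good_set, noisy_set):
    """Send the test sample to the classifier trained on the closer set."""
    g, n = centroid(good_set), centroid(noisy_set)
    return "good" if sq_dist(sample, g) <= sq_dist(sample, n) else "noisy"

good = [[0.0, 0.0], [0.2, 0.1], [0.1, 0.3]]   # samples kept by instance selection
noisy = [[5.0, 5.0], [4.8, 5.2]]              # samples filtered out as outliers
chosen = route([0.3, 0.2], good, noisy)
```

Because even "noisy"-looking test samples still reach a classifier (the one trained on the noisy set) rather than being judged by a model that never saw such data, over-selection hurts less.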

14.
Text classification is an important area of natural language processing. In recent years, deep learning methods have been widely applied to text classification tasks. When processing large-scale data, to balance classification accuracy and processing efficiency, this paper uses BERT-trained word vectors as the embedding layer to further optimize the word vectors of the input sentences, then uses a two-layer GRU network as the backbone to fully extract the text's contextual features, and finally applies an attention mechanism to highlight the target sentence for text classification...

15.
Traffic classification groups similar or related traffic data, which is one main stream technique of data fusion in the field of network management and security. With the rapid growth of network users and the emergence of new networking services, network traffic classification has attracted increasing attention. Many new traffic classification techniques have been developed and widely applied. However, the existing literature lacks a thorough survey to summarize, compare and analyze the recent advances of network traffic classification in order to deliver a holistic perspective. This paper carefully reviews existing network traffic classification methods from a new and comprehensive perspective by classifying them into five categories based on representative classification features, i.e., statistics-based classification, correlation-based classification, behavior-based classification, payload-based classification, and port-based classification. A series of criteria are proposed for the purpose of evaluating the performance of existing traffic classification methods. For each specified category, we analyze and discuss the details, advantages and disadvantages of its existing methods, and also present the traffic features commonly used. Summaries of investigation are offered for providing a holistic and specialized view on the state-of-art. For convenience, we also cover a discussion on the mostly used datasets and the traffic features adopted for traffic classification in the review. At the end, we identify a list of open issues and future directions in this research field.

16.
Offline/realtime traffic classification using semi-supervised learning  (Cited 4 times: 0 self-citations, 4 external)
Jeffrey, Anirban, Martin, Ira, Carey. Performance Evaluation, 2007, 64(9-12): 1194-1213
Identifying and categorizing network traffic by application type is challenging because of the continued evolution of applications, especially of those with a desire to be undetectable. The diminished effectiveness of port-based identification and the overheads of deep packet inspection approaches motivate us to classify traffic by exploiting distinctive flow characteristics of applications when they communicate on a network. In this paper, we explore this latter approach and propose a semi-supervised classification method that can accommodate both known and unknown applications. To the best of our knowledge, this is the first work to use semi-supervised learning techniques for the traffic classification problem. Our approach allows classifiers to be designed from training data that consists of only a few labeled and many unlabeled flows. We consider pragmatic classification issues such as longevity of classifiers and the need for retraining of classifiers. Our performance evaluation using empirical Internet traffic traces that span a 6-month period shows that: (1) high flow and byte classification accuracy (i.e., greater than 90%) can be achieved using training data that consists of a small number of labeled and a large number of unlabeled flows; (2) presence of “mice” and “elephant” flows in the Internet complicates the design of classifiers, especially of those with high byte accuracy, and necessitates the use of weighted sampling techniques to obtain training flows; and (3) retraining of classifiers is necessary only when there are non-transient changes in the network usage characteristics. As a proof of concept, we implement prototype offline and realtime classification systems to demonstrate the feasibility of our approach.  
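The semi-supervised scheme of this line of work clusters all flows (labeled and unlabeled) and then maps each cluster to a class via its few labeled members. A minimal cluster-then-label sketch follows; the toy flow features, the tiny deterministic k-means (first k points as initial centroids), and the majority-label mapping are simplifying assumptions, not the paper's exact algorithm.

```python
def kmeans(points, k, iters=10):
    """Tiny k-means with deterministic initialization (first k points)."""
    cents = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, cents[c])))
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                cents[c] = [sum(m[d] for m in members) / len(members)
                            for d in range(len(members[0]))]
    return assign

# toy flow features; only flows 0 and 3 carry application labels
flows = [[1.0, 1.0], [1.2, 0.9], [0.9, 1.1], [9.0, 9.0], [8.8, 9.2]]
labeled = {0: "web", 3: "p2p"}
assign = kmeans(flows, k=2)
# map each cluster to the label of its labeled member(s)
cluster_label = {}
for i, lab in labeled.items():
    cluster_label.setdefault(assign[i], lab)
pred = [cluster_label.get(c, "unknown") for c in assign]
```

Clusters containing no labeled flow stay "unknown", which is how the method accommodates previously unseen applications.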

17.
Automatic Text Classification Using Concept-Primitive Features  (Cited 2 times: 0 self-citations, 2 external)
Automatic text classification is a key technology for processing large-scale document data. Classification usually begins with text representation, i.e., converting the text into a feature vector; commonly used features include feature words, word frequency, N-grams, and so on. This paper studies a new feature: the HNC concept symbols of words. HNC concept symbols come from the semantic network built by HNC (Hierarchical Network of Concepts) theory and express the semantic information of words as symbolic expressions. Using HNC concept symbols as features therefore means using the semantic information embedded in the text as features, which differs fundamentally from surface-level features such as word frequency. A classifier was built with a maximum entropy model, using word segmentation and HNC concept symbols as features, and the classification results were compared. The results show that HNC features outperform word-segmentation features.

18.
Extracting structured information from vulnerability reports is of great significance for security research. Security researchers often need to filter large-scale CVE data by specific criteria or run automated analysis and testing on vulnerabilities. However, existing CVE databases contain only unstructured textual descriptions and incomplete auxiliary information. Extracting structured information from the description text helps researchers better organize and analyze CVEs. This paper identifies seven core elements contained in vulnerability descriptions, builds a model for structured extraction, casts the information extraction as a sequence labeling model, and constructs a dataset to train it. Experiments show that the model extracts the various kinds of key information from CVE text with high accuracy.
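The output of such a sequence labeling model is typically a BIO tag per token, which must then be decoded into the structured elements. A minimal BIO decoder follows; the product name "Foo Server" and the tag set (VULTYPE, PRODUCT, VERSION, ATTACKER) are hypothetical examples, not the paper's seven elements.

```python
def bio_to_spans(tokens, tags):
    """Decode per-token BIO tags into (entity_type, text) spans."""
    spans, cur, cur_type = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):          # a new entity begins
            if cur:
                spans.append((cur_type, " ".join(cur)))
            cur, cur_type = [tok], tag[2:]
        elif tag.startswith("I-") and cur and tag[2:] == cur_type:
            cur.append(tok)               # continue the current entity
        else:                             # "O" or inconsistent tag closes it
            if cur:
                spans.append((cur_type, " ".join(cur)))
            cur, cur_type = [], None
    if cur:
        spans.append((cur_type, " ".join(cur)))
    return spans

tokens = ["Buffer", "overflow", "in", "Foo", "Server", "2.1",
          "allows", "remote", "attackers"]
tags = ["B-VULTYPE", "I-VULTYPE", "O", "B-PRODUCT", "I-PRODUCT",
        "B-VERSION", "O", "B-ATTACKER", "I-ATTACKER"]
spans = bio_to_spans(tokens, tags)
```

Each decoded span becomes one field of the structured CVE record, which is what enables filtering and automated analysis over the description text.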

19.
Hierarchical Dirichlet process (HDP) is an unsupervised method which has been widely used for topic extraction and document clustering problems. One advantage of HDP is that it has an inherent mechanism to determine the total number of clusters/topics. However, HDP has three weaknesses: (1) there is no mechanism to use known labels or incorporate expert knowledge into the learning procedure, thus precluding users from directing the learning and making the final results incomprehensible; (2) it cannot detect the categories expected by applications without expert guidance; (3) it does not automatically adjust the model parameters and structure in a changing environment. To address these weaknesses, this paper proposes an incremental learning method, with partial supervision for HDP, which enables the topic model (initially guided by partial knowledge) to incrementally adapt to the latest available information. An important contribution of this work is the application of granular computing to HDP for partial-supervision and incremental learning which results in a more controllable and interpretable model structure. These enhancements provide a more flexible approach with expert guidance for the model learning and hence results in better prediction accuracy and interpretability.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号