Similar Documents
20 similar documents retrieved.
1.
Text classification, a core problem in text mining, has become an important topic in natural language processing. Short text classification, because of its sparsity, real-time requirements, and irregular wording, is one of the pressing open problems in text classification. In certain scenarios short texts carry a large amount of implicit semantics, which makes it challenging to mine implicit semantic features from such limited text. Existing approaches to short text classification mostly use traditional machine learning or deep learning algorithms, but building these models is complex and labor-intensive, and their efficiency is low. Moreover, short texts contain little useful information and are highly colloquial, which places high demands on a model's feature-learning ability. To address these problems, this paper proposes the KAeRCNN model, which builds on TextRCNN and integrates knowledge awareness with a dual attention mechanism. Knowledge awareness comprises knowledge-graph entity linking and knowledge-graph embedding, introducing external knowledge to obtain semantic features, while the dual attention mechanism improves the efficiency with which the model extracts useful information from short texts. Experimental results show that KAeRCNN significantly outperforms traditional machine learning algorithms in classification accuracy, F1 score, and practical effectiveness. We validated the performance and adaptability of the algorithm: accuracy reaches 95.54% and the F1 score reaches 0.901; compared with four traditional machine learning algorithms, accuracy improves by about 14% on average and the F1 score by about 13%. Compared with TextRCNN, KAeRCNN improves accuracy by about 3%. Comparative experiments against deep learning algorithms also show that the model performs well on short text classification in other domains. Both theory and experiment demonstrate that the proposed KAeRCNN model is more effective for short text classification.
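The abstract does not give implementation details, but the TextRCNN-plus-attention backbone it extends can be sketched in a few lines. Below is a minimal PyTorch sketch of such a classifier; the knowledge-aware component (entity linking and knowledge-graph embeddings) is omitted, and the layer sizes, the simplification of the dual attention to a single word-level attention, and the dummy input are all assumptions for illustration.

```python
# Minimal TextRCNN-style classifier with word-level attention (illustrative only).
import torch
import torch.nn as nn

class RCNNWithAttention(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Bidirectional LSTM supplies left and right context (the recurrent part of TextRCNN).
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Attention scores computed over [context ; embedding] word representations.
        self.attn = nn.Linear(2 * hidden_dim + embed_dim, 1)
        self.fc = nn.Linear(2 * hidden_dim + embed_dim, num_classes)

    def forward(self, token_ids):                        # token_ids: (batch, seq_len)
        emb = self.embedding(token_ids)                   # (batch, seq_len, embed_dim)
        ctx, _ = self.rnn(emb)                            # (batch, seq_len, 2*hidden_dim)
        feats = torch.cat([ctx, emb], dim=-1)             # contextualised word features
        weights = torch.softmax(self.attn(feats), dim=1)  # attention weight per word
        doc = (weights * feats).sum(dim=1)                # attention-weighted pooling
        return self.fc(doc)

model = RCNNWithAttention(vocab_size=30000, num_classes=5)
logits = model(torch.randint(0, 30000, (4, 32)))          # four dummy short texts of length 32
print(logits.shape)                                        # torch.Size([4, 5])
```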

2.
KNN is a commonly used algorithm in automatic text classification and achieves relatively high accuracy on low-dimensional text. When processing large volumes of high-dimensional text, however, the traditional KNN algorithm must compute similarities against a large number of training samples, which lowers classification efficiency. To address this, this paper first applies rough set theory to perform attribute reduction on high-dimensional text and remove redundant attributes, and then classifies the texts with an improved cluster-based KNN algorithm. Simulation experiments show that the method improves the precision and accuracy of text classification.
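As a rough illustration of this two-stage pipeline, the sketch below uses chi-square feature selection as a stand-in for rough-set attribute reduction and per-class k-means centroids as a stand-in for the cluster-based KNN; the dataset, feature counts, and cluster numbers are assumptions, not the paper's setup.

```python
# Reduce the feature space, then classify with a centroid-pruned KNN (illustrative stand-ins).
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

train = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])
X = TfidfVectorizer(max_features=20000).fit_transform(train.data)
X = SelectKBest(chi2, k=2000).fit_transform(X, train.target).toarray()  # attribute-reduction stand-in

# Replace each class's samples with a handful of cluster centroids to cut KNN comparisons.
centroids, labels = [], []
for c in np.unique(train.target):
    km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X[train.target == c])
    centroids.append(km.cluster_centers_)
    labels += [c] * 10

knn = KNeighborsClassifier(n_neighbors=3).fit(np.vstack(centroids), labels)
print(knn.predict(X[:5]), train.target[:5])
```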

3.
Manga, a Japanese word for comics, is a popular form of visual entertainment worldwide. Nowadays, electronic devices boost the fast development of motion manga for the purpose of visual richness and manga promotion. To convert static manga images into motion mangas, text balloons are usually animated individually for better storytelling. This requires artists to cut out each text balloon meticulously, which is labor-intensive and time-consuming. In this paper, we propose an automatic approach that can extract text balloons from manga images both accurately and effectively. Our approach starts by extracting white areas that contain texts as text blobs. Different from existing text blob extraction methods that rely only on shape properties, we incorporate text properties in order to differentiate text blobs from texture blobs. Instead of heuristic parameter thresholding, we achieve text blob classification via learning-based classifiers. Along with the novel text blob classification method, we also make the first attempt at tackling the boundary issue in balloon extraction. We apply our method to various styles of mangas and comics with texts in different languages, and convincing results are obtained in all cases.

4.
In recent years, using machine learning algorithms to identify tour-guide violations from complaint texts, assisting tourism regulators and providing a basis for supervision, has become an inevitable trend. However, tour-guide complaint corpora are homogeneous and hard to obtain, so augmenting these texts to meet the needs of violation recognition is an urgent problem. To address it, a hybrid augmentation method for tour-guide complaint texts based on EDA (easy data augmentation) and back-translation is proposed. The complaint texts are augmented from both the EDA and the back-translation perspectives, and the augmented corpora produced by the two methods are merged to obtain the final augmented text. The method was applied and validated in a real tour-guide violation recognition system. Extensive experiments comparing it with traditional EDA augmentation and back-translation augmentation alone show that the hybrid method achieves higher accuracy and a better augmentation effect than either method by itself; in the real violation recognition system it reached 87.54% accuracy, an improvement of 7.4% over the original dataset.
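To make the two augmentation directions concrete, here is a toy sketch of two standard EDA operations (random swap and random deletion) alongside a stubbed back-translation step; the stub and the example sentence are placeholders, since a real system would call a translation model in both directions.

```python
# Toy EDA operations plus a back-translation placeholder for hybrid text augmentation.
import random

def random_swap(words, n=1):
    words = words[:]
    for _ in range(n):
        if len(words) < 2:
            break
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return words

def random_deletion(words, p=0.1):
    kept = [w for w in words if random.random() > p]
    return kept if kept else [random.choice(words)]

def back_translate(sentence):
    # Placeholder: translate to a pivot language and back (e.g., zh -> en -> zh).
    return sentence

def augment(sentence):
    words = sentence.split()
    eda_variants = [" ".join(random_swap(words)), " ".join(random_deletion(words))]
    return eda_variants + [back_translate(sentence)]

print(augment("导游 强迫 游客 购物 并 辱骂 游客"))
```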

5.
This paper targets automatic headline generation for news in the Party-building domain and proposes Tri-PCN, an automatic text summarization model that incorporates a pointer network. Compared with traditional encoder-decoder summarization models, a Party-building headline generation model must additionally (1) extract features from longer text sequences and (2) retain key Party-building information. Because Party-building news involves longer sequences than ordinary summarization tasks, the model uses a Transformer to extract multi-level global text features during decoding. To retain key Party-building information during headline generation, the copy mechanism of the pointer-generator network is introduced so that keyword information can be copied directly from the news text. Using ROUGE as the evaluation metric, the results show that the proposed Tri-PCN model clearly outperforms the baseline models on automatic summarization of Party-building news.
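The copy mechanism mentioned here is the standard pointer-generator idea: the output distribution mixes a vocabulary (generation) distribution with an attention-derived copy distribution over source tokens, weighted by a generation probability. The snippet below is a toy numerical illustration of that mixing step, not the Tri-PCN implementation; all tensors and sizes are made up.

```python
# Pointer-generator mixing step: p_final = p_gen * p_vocab + (1 - p_gen) * copy distribution.
import torch

vocab_size, src_len = 10, 4
p_vocab = torch.softmax(torch.randn(vocab_size), dim=0)    # decoder's generation distribution
attn = torch.softmax(torch.randn(src_len), dim=0)          # attention over source tokens
src_ids = torch.tensor([2, 7, 7, 3])                        # vocabulary ids of the source tokens
p_gen = torch.sigmoid(torch.randn(1))                       # learned gate in the real model

final = p_gen * p_vocab
final = final.scatter_add(0, src_ids, (1 - p_gen) * attn)   # add copy mass onto source ids
print(final.sum())                                           # ~1.0: still a valid distribution
```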

6.
With the goal of automatically classifying crime-information texts for public security departments, this paper studies existing multi-layer automatic text classification techniques, presents a process model for a multi-layer text classifier, investigates the feature extraction method within the model in depth, and proposes an improved weight-computation formula for feature extraction. Experiments show that the classifier can effectively solve the automatic classification of crime-information texts.

7.
Named-entity recognition (NER) involves the identification and classification of named entities in text. This is an important subtask in most language engineering applications, in particular information extraction, where different types of named entity are associated with specific roles in events. In this paper, we present a prototype NER system for Greek texts that we developed based on a NER system for English. Both systems are evaluated on corpora of the same domain and of similar size. The time-consuming process of constructing and updating domain-specific resources in both systems led us to examine a machine learning method for the automatic construction of such resources for a particular application in a specific language.

8.
Language models trained on large-scale corpora achieve outstanding performance on text generation tasks. However, research has found that such models may produce offensive text when perturbed. This unpredictable offensiveness creates difficulties for both research on and practical use of language models; to avoid risk, researchers have had to refrain from releasing the language models behind their papers. How to automatically evaluate the offensiveness of a language model has therefore become a pressing problem. To address this problem, this paper proposes a...

9.
In text classification tasks, short texts have sparse features and irregular wording, so traditional natural language processing methods are limited when applied to short text classification. Targeting these characteristics, this paper proposes a short text classification algorithm based on the fusion of BERT (bidirectional encoder representations from Transformers) and GSDMM (collapsed Gibbs sampling algorithm for the Dirichlet multinomial mixture model) together with clustering guidance, in order to improve the effectiveness and accuracy of short text classification. On the one hand, the BERT-GSDMM fusion model converts short texts into integrated semantic vectors that capture both global semantic features and topic features, addressing the sparsity of short-text features and the lack of topic information. On the other hand, a clustering-guidance algorithm introduced in the front-end training of the classifier expands the labeled data and also improves the interpretability of the results. Finally, the expanded labeled dataset is used to train the classifier for automatic short text classification. Using negative-review data from an e-commerce platform as the validation set, multiple comparative experiments verify the effectiveness and advantages of the algorithm for short text classification.
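The first half of such a pipeline, encoding short texts with BERT and grouping the resulting vectors into topic-like clusters, can be sketched as follows. KMeans is used here only as a simple stand-in for GSDMM, and the model name, example reviews, and cluster count are assumptions.

```python
# Encode short texts with BERT, then cluster the sentence vectors (KMeans as a GSDMM stand-in).
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.cluster import KMeans

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
encoder = AutoModel.from_pretrained("bert-base-chinese")

texts = ["物流太慢了", "质量很差 用了一天就坏", "客服态度恶劣"]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    out = encoder(**batch)

# Mean-pool token embeddings into one semantic vector per short text.
mask = batch["attention_mask"].unsqueeze(-1)
vectors = (out.last_hidden_state * mask).sum(1) / mask.sum(1)

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors.numpy())
print(clusters)
```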

10.
Examining past near-miss reports can provide information for learning how to mitigate and control hazards that materialise on construction sites. Yet analysing near-miss reports can be time-consuming and labour-intensive. Automatic text classification using machine learning and ontology-based approaches can be used to mine reports of this nature, but such approaches tend to suffer from weak generalisation, which can adversely affect classification performance. To address this limitation and improve classification accuracy, we develop an improved deep learning-based approach to automatically classify near-miss information contained within safety reports using Bidirectional Transformers for Language Understanding (BERT). Our proposed approach is designed to pre-train deep bi-directional representations by jointly extracting context features in all layers. We validate the effectiveness and feasibility of our approach using a database of near-miss reports derived from actual construction projects, which was used to train and test our model. The results demonstrate that our approach can accurately classify 'near misses' and outperforms prevailing state-of-the-art automatic text classification approaches. Understanding the nature of near misses can give site managers the ability to identify work areas and instances where an accident is likely to occur.
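A minimal fine-tuning sketch for this kind of report classification with a pretrained BERT encoder is shown below; the label set, example sentences, and training settings are illustrative assumptions rather than the authors' configuration.

```python
# Fine-tune a pretrained BERT classifier on a toy near-miss vs. no-incident example.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

labels = {"near_miss": 0, "no_incident": 1}
texts = ["Worker almost struck by reversing excavator", "Toolbox talk held before shift"]
targets = torch.tensor([0, 1])

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=len(labels))
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
model.train()
for _ in range(3):                                    # a few toy gradient steps
    out = model(**batch, labels=targets)              # cross-entropy loss computed internally
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
with torch.no_grad():
    print(model(**batch).logits.argmax(dim=-1))
```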

11.
Research on an overlapping-class text classification algorithm based on the hypersphere support vector machine
A classification algorithm is proposed for texts that may belong to more than one class. For the texts of each class, a hypersphere support vector machine is used to find, in feature space, the smallest hypersphere enclosing as many texts of that class as possible, so that the classes are separated by hyperspheres. A text to be classified is assigned a class according to its distances to the hypersphere centers. Experimental results show that the algorithm achieves both fast classification and high accuracy.
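The geometry can be illustrated with a much-simplified sketch: fit a sphere per class (here just the class centroid plus a radius covering most of its training points) and label a new sample by its relative distance to each sphere. A true hypersphere SVM (SVDD) solves a constrained optimisation in kernel feature space; this centroid version and the synthetic data are only stand-ins.

```python
# Per-class sphere (centroid + radius) classification, a crude stand-in for hypersphere SVMs.
import numpy as np

def fit_spheres(X, y, coverage=0.95):
    spheres = {}
    for c in np.unique(y):
        pts = X[y == c]
        center = pts.mean(axis=0)
        radius = np.quantile(np.linalg.norm(pts - center, axis=1), coverage)
        spheres[c] = (center, radius)
    return spheres

def predict(x, spheres):
    # Assign the class whose sphere the point is closest to, relative to that sphere's radius.
    return min(spheres, key=lambda c: np.linalg.norm(x - spheres[c][0]) / spheres[c][1])

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(3, 1, (50, 10))])
y = np.array([0] * 50 + [1] * 50)
spheres = fit_spheres(X, y)
print(predict(rng.normal(3, 1, 10), spheres))   # expected: 1
```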

12.
Ontology learning (OL) from texts has been suggested as a technology that helps to reduce the bottleneck of knowledge acquisition in the construction of domain ontologies. In this learning process, the discovery, and possibly also labeling, of non-taxonomic relationships has been identified as one of the most difficult and often neglected problems. In this paper, we propose a technique that addresses this issue by analyzing a domain text corpus to extract verbs frequently applied for linking certain pairs of concepts. Integrated in an ontology building process, this technique aims to reduce the workload of knowledge engineers and domain experts by suggesting candidate relationships that might become part of the ontology, as well as prospective labels for them.
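The core extraction step, finding verbs that frequently connect a given pair of concepts, might look roughly like the sketch below, which scans sentences where both concepts occur and counts the verbs in them. The spaCy pipeline, the concept pair, and the two example sentences are assumptions for illustration.

```python
# Count candidate relation verbs in sentences where two known domain concepts co-occur.
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")
concepts = {"antibiotic", "infection"}            # assumed pair of ontology concepts
corpus = [
    "The antibiotic treats the infection quickly.",
    "An antibiotic may prevent a secondary infection.",
]

candidates = Counter()
for doc in map(nlp, corpus):
    for sent in doc.sents:
        lemmas = {t.lemma_.lower() for t in sent}
        if concepts <= lemmas:                     # both concepts appear in this sentence
            candidates.update(t.lemma_ for t in sent if t.pos_ == "VERB")

print(candidates.most_common())                    # e.g., [('treat', 1), ('prevent', 1)]
```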

13.
Short texts are typically composed of a small number of words, most of which are abbreviations, typos and other kinds of noise. This makes the noise-to-signal ratio relatively high for this specific category of text. A high proportion of noise in the data is undesirable for analysis procedures as well as machine learning applications. Text normalization techniques are used to reduce the noise and improve the quality of text for processing and analysis purposes. In this work, we propose a combination of statistical and rule-based techniques to normalize short texts. More specifically, we focus our attention on SMS messages. We base our normalization approach on a statistical machine translation system which translates from noisy data to clean data and is trained on a small manually annotated set. Then, we study several automatic methods to extract more general rules from the normalizations generated by the statistical machine translation system. We illustrate the proposed methodology by conducting experiments with an SMS Haitian-Créole data collection. To evaluate the performance of our methodology we use several Haitian-Créole dictionaries, the well-known perplexity criterion and the achieved reduction of vocabulary.
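As a toy illustration of rule extraction, the snippet below collects token-level substitutions from a few manually normalised SMS pairs and applies them as a lookup dictionary. The example pairs are invented English SMS shorthand, not the paper's Haitian-Créole data, and a real system would first train a statistical MT model on the annotated set.

```python
# Extract simple token-substitution rules from annotated (noisy, clean) SMS pairs.
from collections import Counter

annotated = [
    ("c u 2nite", "see you tonight"),
    ("gr8 thx", "great thanks"),
    ("c u l8r", "see you later"),
]

subs = Counter()
for noisy, clean in annotated:
    for n, c in zip(noisy.split(), clean.split()):   # assumes aligned, equal-length pairs
        if n != c:
            subs[(n, c)] += 1

rules = {n: c for (n, c), _ in subs.most_common()}

def normalize(text):
    return " ".join(rules.get(tok, tok) for tok in text.split())

print(normalize("thx c u 2nite"))   # -> "thanks see you tonight"
```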

14.
To push the state of the art in text mining applications, research in natural language processing has increasingly been investigating automatic irony detection, but manually annotated irony corpora are scarce. We present the construction of a manually annotated irony corpus based on a fine-grained annotation scheme that allows for identification of different types of irony. We conduct a series of binary classification experiments for automatic irony recognition using a support vector machine (SVM) that exploits a varied feature set and compare this method to a deep learning approach that is based on an LSTM network and (pre-trained) word embeddings. Evaluation on a held-out corpus shows that the SVM model outperforms the neural network approach and benefits from combining lexical, semantic and syntactic information sources. A qualitative analysis of the classification output reveals that the classifier performance may be further enhanced by integrating implicit sentiment information and context- and user-based features.
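For reference, the SVM side of such a comparison can be set up in a few lines with a lexical-feature pipeline; the tiny toy dataset below and the restriction to TF-IDF n-gram features (no semantic or syntactic information) are assumptions for illustration only.

```python
# TF-IDF + linear SVM baseline for binary irony recognition on a toy dataset.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

texts = [
    "I just love being stuck in traffic for three hours",
    "The new library opened downtown yesterday",
    "Great, my flight got cancelled again. Best day ever.",
    "The committee approved the budget proposal",
]
labels = [1, 0, 1, 0]    # 1 = ironic, 0 = not ironic

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["What a fantastic way to start the week, no hot water"]))
```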

15.
Automatic classification of Deep Web sources is a prerequisite and foundation for building deep web data integration systems. A Deep Web classification method based on domain feature texts is proposed. First, ontology knowledge is used to abstract different terms that express the same semantics into concepts; a definition of domain relevance is then given and used as a quantitative criterion for feature-text selection, avoiding the subjectivity and uncertainty of manual selection. In constructing the interface vector model, the differing contributions of feature texts to classification are taken into account, and an improved W-TFIDF weight-computation method is proposed. Finally, the KNN algorithm is used to classify the interface vectors. Comparative experiments show that the feature texts selected by the proposed method are accurate and effective, that the new feature-text weighting method significantly improves classification accuracy, and that it exhibits good stability under the KNN algorithm.

16.
Enriching short text representation in microblog for clustering
Social media websites allow users to exchange short texts such as tweets via microblogs and user status in friendship networks. Their limited length, pervasive abbreviations, and coined acronyms and words exacerbate the problems of synonymy and polysemy, and bring about new challenges to data mining applications such as text clustering and classification. To address these issues, we dissect some potential causes and devise an efficient approach that enriches data representation by employing machine translation to increase the number of features from different languages. Then we propose a novel framework which performs multi-language knowledge integration and feature reduction simultaneously through matrix factorization techniques. The proposed approach is evaluated extensively in terms of effectiveness on two social media datasets from Facebook and Twitter. Given the significant performance improvement, we further investigate the factors that contribute to it.
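A bare-bones sketch of the integration-and-reduction step might look like the following: build term matrices from the original posts and from their translations, stack them, and factorise the joint matrix before clustering. The "translations" are hard-coded placeholders and all dimensions are toy values.

```python
# Stack multi-language term matrices and reduce them jointly with NMF before clustering.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

posts = ["omg new phone is gr8", "phone battery dies so fast", "match tonight lets go"]
translations = ["nuevo teléfono es genial", "la batería del teléfono se agota", "partido esta noche vamos"]

X_en = CountVectorizer().fit_transform(posts).toarray()
X_es = CountVectorizer().fit_transform(translations).toarray()
X = np.hstack([X_en, X_es])                        # multi-language feature integration

W = NMF(n_components=2, init="nndsvda", random_state=0).fit_transform(X)   # reduced representation
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(W))
```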

17.
Text classification automatically assigns a set of predefined categories or topics to a document. In text classification, the representation of documents strongly affects the performance of the learner. To classify Kazakh texts, a stemmer for Kazakh is designed and implemented according to Kazakh grammar rules to complete text preprocessing. A sample-distance formula based on the nearest support vectors is proposed, avoiding the need to choose the parameter k, and Kazakh text classification is realized with SV-NN, a special combination of the SVM and KNN classification algorithms. Simulation experiments on a self-built Kazakh text corpus demonstrate the effectiveness of the proposed algorithm and confirm the theoretical results.
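The SV-NN idea, train an SVM, keep its support vectors, and let the nearest support vector decide the class so that no k has to be chosen, can be sketched as follows on synthetic data; the dataset and linear kernel are assumptions.

```python
# Nearest-support-vector classification (SV-NN-style) on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, n_informative=5, random_state=0)
svm = SVC(kernel="linear").fit(X, y)

sv = svm.support_vectors_              # the retained boundary samples
sv_labels = y[svm.support_]            # their class labels

def sv_nn_predict(x):
    distances = np.linalg.norm(sv - x, axis=1)
    return sv_labels[np.argmin(distances)]    # nearest support vector decides the class

print(sv_nn_predict(X[0]), y[0])
```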

18.
Regions and techniques for extracting feature information from Chinese text
This paper examines the regions and techniques for extracting feature information from Chinese text. Taking news corpora, scientific papers, and official documents as examples, it discusses in detail the regions and techniques for extracting feature information from each type of text, and for scientific papers it also gives some operational production rules. The methods and conclusions are of reference value for researchers in automatic indexing, automatic classification, and automatic summarization alike.

19.
Automatic categorization of patient thought records (TRs) is important in cognitive behavior therapy, which is a useful augmentation of standard clinical treatment for major depressive disorder. Because both collecting and labeling TR data are expensive, it is usually cost-prohibitive to require a large amount of TR data, as well as their corresponding category labels, to train a classification model with high classification accuracy. Because in practice we only have a very limited amount of labeled and unlabeled training TR data, traditional semi-supervised learning methods and transfer learning methods, which are the most commonly used strategies to deal with the lack of training data in statistical learning, cannot work well in the task of automatic TR categorization. To address this challenge, we propose to tackle the TR categorization problem from a new perspective via self-taught learning, an emerging technique in machine learning. Self-taught learning is a special type of transfer learning. Instead of requiring labeled data from an auxiliary domain that are relevant to the classification task of interest, as in traditional transfer learning methods, it learns the inherent structures of the auxiliary data and does not require their labels. As a result, a classifier achieves decent classification accuracy using the limited amount of labeled TR texts, with assistance from the large amount of text data obtained from inexpensive, or even free, resources. That is, a cost-effective TR categorization system can be built that may be particularly useful for diagnosis of patients and training of new therapists. By further taking into account the discrete nature of the input text data, we use exponential-family sparse coding instead of the traditional Gaussian sparse coding of self-taught learning to better model the distribution of the input data. We apply the proposed method to the task of classifying patient homework texts. Experimental results show the effectiveness of the proposed automatic TR classification framework.
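A condensed sketch of the self-taught learning recipe appears below: learn a sparse-coding dictionary from plentiful unlabeled text vectors, re-encode the few labeled examples with that dictionary, and train an ordinary classifier on the resulting codes. Standard (Gaussian) sparse coding from scikit-learn is used here instead of the exponential-family variant proposed above, and all data are synthetic placeholders.

```python
# Self-taught learning skeleton: dictionary learned on unlabeled data, classifier on sparse codes.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_unlabeled = rng.poisson(1.0, size=(500, 100)).astype(float)   # abundant cheap text vectors
X_labeled = rng.poisson(1.0, size=(20, 100)).astype(float)      # scarce labeled TR vectors
y_labeled = rng.integers(0, 2, size=20)

dico = DictionaryLearning(n_components=30, alpha=1.0, max_iter=50, random_state=0)
dico.fit(X_unlabeled)                        # structure learned without any labels
codes = dico.transform(X_labeled)            # sparse codes for the labeled examples

clf = LogisticRegression(max_iter=1000).fit(codes, y_labeled)
print(clf.score(codes, y_labeled))
```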

20.
Quality evaluation of generated language strongly influences research on natural language generation and has become a bottleneck constraining the field's development. By surveying language-quality evaluation methods for broadly defined natural language generation tasks such as machine translation, automatic summarization, dialogue systems, image captioning, and machine writing, this paper introduces the characteristics, strengths and weaknesses, and open evaluation resources of human and automatic evaluation, and analyses the different evaluation angles and applicability for different tasks. The comparative analysis of the different evaluation methods can inform the fusion of methods...
