Similar Documents
20 similar documents found (search time: 171 ms)
1.
Chinese named entity recognition (NER) commonly uses character embeddings as the input to neural models, but Chinese has no explicit word boundaries, so character embeddings lose part of the semantic information. To address this problem, this paper proposes a Chinese NER model based on multi-granularity text representations. Character and word representations are first combined at the model input, and an N-gram encoder then mines the latent word-formation information in N-grams, effectively joining three text representations of different granularity and enriching the contextual representation of the sequence. Experiments on the Weibo, Resume, and OntoNotes4 datasets achieve F1 scores of 72.41%, 96.52%, and 82.83%, respectively; compared with the baseline models, the proposed model performs better.
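As a rough illustration of the three granularities such a model combines (characters, words, and character N-grams), here is a minimal sketch; the word segmentation is assumed to be given externally, and the function only produces the raw views, not embeddings:

```python
def multi_granularity_views(sentence, words, n=2):
    """Return three views of a Chinese sentence: individual characters,
    a (given) word segmentation, and overlapping character n-grams."""
    chars = list(sentence)
    ngrams = [sentence[i:i + n] for i in range(len(sentence) - n + 1)]
    return chars, words, ngrams

# Example: a 7-character sentence with a hypothetical 3-word segmentation.
chars, words, bigrams = multi_granularity_views("南京市长江大桥", ["南京市", "长江", "大桥"])
```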

2.
汪琪, 段湘煜. 《计算机科学》 (Computer Science), 2018, 45(11): 226-230
The attention mechanism commonly used in existing neural machine translation models operates at the word level. This paper lifts attention from the word level to the phrase level by applying multiple convolution layers on top of the attention mechanism. After convolution, the attention information reflects phrase structure more clearly and is used to generate new context vectors, which are then integrated into the NMT framework. Experiments on a large-scale Chinese-English test set show that the attention-convolution NMT model captures phrase-structure information well, strengthens contextual dependencies around translated words, improves the context vectors, and raises translation quality.

3.
Existing session-based recommendation algorithms do not fully exploit users' context information. To strengthen personalized recommendation in session-based settings, this paper proposes a context-aware recommendation algorithm that fuses user session data. Context information is mapped to low-dimensional real-valued vectors via embeddings and fused into a session-based recurrent neural network recommender through three combination schemes: Add, Stack, and MLP. A BPR-based loss function dynamically models user preference over session click sequences to improve personalization. Experiments on the Adressa dataset show that, compared with the GRU4REC baseline, the proposed algorithm improves Recall@20 by 3.2% and MRR@20 by 27%.
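The BPR objective mentioned above scores each observed (clicked) item against a sampled negative item and maximizes the probability that the positive scores higher. A minimal NumPy sketch of the loss, independent of any particular recommender:

```python
import numpy as np

def bpr_loss(pos_scores, neg_scores):
    """Bayesian Personalized Ranking loss: for each (positive, negative)
    score pair, minimize -log sigmoid(pos - neg)."""
    diff = np.asarray(pos_scores) - np.asarray(neg_scores)
    return float(np.mean(-np.log(1.0 / (1.0 + np.exp(-diff)))))
```

When positive and negative items score equally, the loss is ln 2; it falls toward 0 as the positive item pulls ahead.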

4.
Generating abstracts for scientific papers automatically, which helps authors write abstracts faster, is one of the research topics of automatic summarization. Compared with ordinary news documents, scientific papers have strong document structure and clear logical relations. Mainstream encoder-decoder abstractive summarization models mainly consider the sequential information of a document and rarely explore its discourse structure. Targeting the characteristics of scientific papers, this paper proposes an automatic summarization model based on a word-section-document hierarchy, using word-section interactions to reinforce the hierarchy of the text structure and the interaction between levels, thereby selecting the key information of a paper. The model also adds a context gating unit to update and refine the context vector so that contextual information is captured more comprehensively. Experimental results show that the proposed model effectively improves the generated summaries on all ROUGE metrics.

6.
To address the lack of reliable datasets and mature supervised models in Chinese document summarization, this work constructs a Chinese document-level summarization corpus of more than 200,000 documents (Chinese Document-level Extractive Summarization Dataset, CDESD) and proposes a supervised document-level extractive summarization model (Document Summarization with SPA Sentence Embedding, DSum-SSE). Built on a neural-network framework, the model first solves sentence-level abstractive summarization with an end-to-end framework combining pointer and attention mechanisms, obtaining representation vectors that reflect each sentence's core meaning, and then introduces an extreme pointer mechanism on top of them to complete the document-level extractive summarization algorithm. Experiments show that DSum-SSE produces higher-quality summaries than TextRank, an unsupervised single-document summarization algorithm. CDESD and DSum-SSE are useful additions to the corpus data and models for Chinese document-level summarization.

7.
This paper proposes a paper-plagiarism detection model based on sentence similarity. A local word-frequency fingerprint algorithm rapidly screens large document collections for suspect documents. Sentence similarity is then computed with a longest ordered common subsequence algorithm, plagiarism details are annotated, and plagiarism evidence is produced. Experiments on the standard Chinese dataset SOGOU-T show that the model has strong local-information mining ability and, to a certain extent, overcomes the low precision of existing plagiarism detection algorithms.
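The longest-ordered-common-subsequence idea can be sketched with the classic dynamic-programming LCS recurrence; the normalization by the longer sentence's length is one plausible choice, not necessarily the paper's exact formula:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence (order-preserving,
    gaps allowed) between two sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def sentence_similarity(s1, s2):
    """Normalize the LCS length by the longer sentence's length."""
    if not s1 or not s2:
        return 0.0
    return lcs_length(s1, s2) / max(len(s1), len(s2))
```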

8.
Sentence-level extraction methods cannot handle the scattering of Chinese event arguments across a document. To address this, a document-level event extraction method based on context fusion is proposed. The document is first split into paragraphs, and a bidirectional long short-term memory network extracts paragraph sequence features; a self-attention mechanism then captures interactions across the paragraph context; these are fused with document sequence features to update the semantic representation; finally, event arguments are extracted by sequence labeling and matched to event types. Compared with other event extraction methods on the same Chinese dataset, experimental results show that the method effectively extracts scattered event arguments and improves extraction performance.

9.
Relation extraction is fundamental to knowledge graph construction, and Chinese relation extraction is one of its hard cases. Existing Chinese relation extraction mostly relies on character features or word features, but the former cannot capture character context and the latter is limited by word-segmentation quality, leading to low performance. To address this, a Chinese relation extraction model based on multi-level semantic awareness is proposed, using the rich semantic information between entities to improve relation prediction for entity pairs. The multi-level semantic awareness has three aspects: first, the ERNIE pre-trained language model converts text into dynamic word vectors; second, an attention mechanism strengthens the semantic representation of the entity-bearing sentence, while external knowledge resolves the Chinese ambiguity of entity words as far as possible; finally, the sentence representation carrying multi-level semantics is fed into a classifier for prediction. Experimental results show that the proposed model outperforms existing models on Chinese relation extraction and is more interpretable.

10.
Aspect-based sentiment analysis aims to predict the sentiment polarity of a specific aspect in a sentence or document, and most current research models the context with attention mechanisms. However, when existing sentiment classifiers use BERT to compute dependencies between representations for feature extraction, they mostly ignore the surrounding linguistic context, so the modeled features lack contextual information; moreover, aspect words do not receive enough weight, which hurts overall classification performance. To address these problems, a dual-feature-fusion attention model for aspect sentiment analysis (DFLGA-BERT) is proposed, with separate local and global feature extraction modules that fully capture the semantic association between aspect words and context. An improved "quasi" attention is added to the global feature extractor of DFLGA-BERT so that the model learns to use subtractive attention during attention fusion to weaken the negative effect of noise. A feature fusion structure based on conditional layer normalization (CLN) is designed to better fuse the local and global features. Experiments on the SentiHood and SemEval 2014 Task 4 datasets show clear performance gains over the baseline models once contextual features are incorporated.

11.
Most existing Information Retrieval models, including the probabilistic and vector space models, are based on the term-independence hypothesis. To go beyond this assumption and thereby capture the semantics of documents and queries more accurately, several works have incorporated phrases or other syntactic information into IR; such attempts have shown slight benefit at best. In language modeling approaches in particular, this extension is achieved through bigram or n-gram models. However, these models consider and weight all bigrams/n-grams uniformly. In this paper we introduce a new approach to select and weight the relevant n-grams associated with a document. Experimental results on three TREC test collections show an improvement over three strong state-of-the-art baselines: the original unigram language model, the Markov Random Field model, and the positional language model.

12.
Previous work on statistical language modeling has shown that it is possible to train a feedforward neural network to approximate probabilities over sequences of words, resulting in significant error reduction when compared to standard baseline models based on n-grams. However, training the neural network model with the maximum-likelihood criterion requires computations proportional to the number of words in the vocabulary. In this paper, we introduce adaptive importance sampling as a way to accelerate training of the model. The idea is to use an adaptive n-gram model to track the conditional distributions produced by the neural network. We show that a very significant speedup can be obtained on standard problems.  相似文献   

13.
Statistical n-gram language modeling is popular in speech recognition and many other applications, but conventional n-grams cannot adequately model long-distance language dependencies. This paper presents a novel approach that mines long-distance word associations and incorporates these features into language models via linear interpolation and maximum entropy (ME) principles. We highlight the discovery of associations among multiple distant words in the training corpus: a mining algorithm recursively merges frequent word subsets and efficiently constructs the set of association patterns. By combining association-pattern features with n-gram models, association pattern n-grams are estimated, with trigger-pair n-grams as a special case in which only the association between two distant words is considered. In experiments on Chinese language modeling, incorporating association patterns significantly reduces the perplexity of n-gram models; incorporation via ME outperforms linear interpolation; association pattern n-grams are superior to trigger-pair n-grams; and perplexity falls further as more association steps are used. The proposed association pattern n-grams also raise document classification accuracy and improve speech recognition rates.
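The linear-interpolation half of the combination scheme can be illustrated with a toy bigram/unigram mixture; this is a generic sketch of interpolated n-gram estimation, not the paper's association-pattern model:

```python
from collections import Counter

def interpolated_bigram_lm(tokens, lam=0.7):
    """Build P(w2|w1) = lam * P_bigram(w2|w1) + (1-lam) * P_unigram(w2)
    from maximum-likelihood counts over a token list."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    total = len(tokens)

    def prob(w1, w2):
        p_bi = bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0
        p_uni = unigrams[w2] / total
        return lam * p_bi + (1 - lam) * p_uni

    return prob

prob = interpolated_bigram_lm("a b a b".split())
```

With this corpus, P("b"|"a") = 0.7 * 1.0 + 0.3 * 0.5 = 0.85; the unigram term keeps unseen bigrams from getting zero probability.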

14.
韩京宇, 杨健. 《计算机应用》 (Journal of Computer Applications), 2014, 34(12): 3475-3480
Existing inverted-list keyword indexes for graphs neither handle multi-keyword queries efficiently nor tolerate keyword misspellings. To support multi-keyword queries on graphs, a two-level index combining bitmaps and locality-sensitive hashing (BLH) is proposed: the upper level builds a bitmap that maps the n-grams of a keyword combination to subgraph clusters, each cluster storing similar subgraphs; the lower level builds an LSH index on each cluster and uses the n-grams of the keyword combination to locate the subgraphs containing it. The method significantly reduces the I/O of keyword queries on graphs, cutting query time by 80%; and because the index is built on n-grams, it is insensitive to spelling errors, returning the results users expect even under keyword misspellings. Experimental results on real datasets demonstrate the effectiveness of the BLH index, which supports efficient querying of the Web and social networks.
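The reason an n-gram index tolerates misspellings is that a small typo changes only a few of a keyword's n-grams. A minimal sketch of this effect using Jaccard similarity over character bigrams (an illustrative measure, not the BLH index itself):

```python
def char_ngrams(s, n=2):
    """Set of overlapping character n-grams of a string."""
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def ngram_similarity(a, b, n=2):
    """Jaccard similarity over character n-grams; misspelled keywords
    still share most n-grams with the correct spelling."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 1.0
```

For example, "keyword" and the typo "keywrod" still share the bigrams {ke, ey, yw}, while an unrelated keyword shares none.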

15.
Malware classification using machine learning algorithms is a difficult task, in part due to the absence of strong natural features in raw executable binary files. Byte n-grams previously have been used as features, but little work has been done to explain their performance or to understand what concepts are actually being learned. In contrast to other work using n-gram features, in this work we use orders of magnitude more data, and we perform feature selection during model building using Elastic-Net regularized Logistic Regression. We compute a regularization path and analyze novel multi-byte identifiers. Through this process, we discover significant previously unreported issues with byte n-gram features that cause their benefits and practicality to be overestimated. Three primary issues emerged from our work. First, we discovered a flaw in how previous corpora were created that leads to an over-estimation of classification accuracy. Second, we discovered that most of the information contained in n-grams stem from string features that could be obtained in simpler ways. Finally, we demonstrate that n-gram features promote overfitting, even with linear models and extreme regularization.  相似文献   
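The raw feature extraction underlying such studies is simple to sketch: slide a window over the executable's bytes and count the resulting n-grams. This is a generic illustration, not the authors' large-scale pipeline:

```python
from collections import Counter

def byte_ngram_features(blob, n=4, top_k=5):
    """Count overlapping byte n-grams in a binary blob and return the
    top_k most frequent as candidate features."""
    counts = Counter(blob[i:i + n] for i in range(len(blob) - n + 1))
    return counts.most_common(top_k)

# Toy "binary": a repeated 4-byte header-like pattern.
top = byte_ngram_features(b"MZ\x90\x00" * 3, n=4, top_k=1)
```

Note how the repeated pattern dominates the counts; the paper's finding is that many such dominant n-grams are really string-like artifacts obtainable in simpler ways.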

16.
Cache-based adaptive language models improve a language model's adaptivity across domains to some extent, but their assumption is too simple: that a word appearing earlier in a text tends to recur later. Observation of real texts suggests that, besides reusing earlier words, authors also use synonyms and near-synonyms of earlier words to avoid monotonous wording; moreover, since an article revolves around a topic, many of its words are strongly related semantically. We therefore extend the cache-based language model: using a Chinese semantic thesaurus, words semantically similar or related to those already in the cache are also brought into the cache. Experiments show that this extension greatly improves the original model, reducing perplexity by 40.11% compared with the n-gram language model and effectively strengthening the language model's adaptivity.
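The extension can be sketched in two steps: expand the cache through a thesaurus, then interpolate a cache bonus into the base probability. The interpolation weight and the uniform cache distribution below are illustrative assumptions, not the paper's exact formulation:

```python
def build_cache(history, thesaurus):
    """Cache the words seen so far plus their thesaurus neighbours,
    as in the extended cache-based model."""
    cache = set(history)
    for w in history:
        cache |= set(thesaurus.get(w, []))
    return cache

def cache_boosted_prob(word, base_prob, cache, beta=0.1):
    """Interpolate a base n-gram probability with a uniform cache bonus."""
    bonus = 1.0 / len(cache) if word in cache else 0.0
    return (1 - beta) * base_prob + beta * bonus

# Hypothetical one-entry thesaurus: "car" lists "automobile" as a synonym.
cache = build_cache(["car"], {"car": ["automobile"]})
```

Here "automobile" gets a cache boost even though only "car" has actually appeared.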

17.
Language models are crucial for many tasks in NLP (Natural Language Processing), and n-grams are the standard way to build them. Huge effort is being invested in improving n-gram language models. By introducing external information (morphology, syntax, partitioning into documents, etc.) into the models, a significant improvement can be achieved. The models can, however, also be improved with no external information; smoothing is an excellent example of such an improvement. In this article we show another way of improving the models that likewise requires no external information. We examine patterns that can be found in large corpora by building semantic spaces (HAL, COALS, BEAGLE, and others described in this article). These semantic spaces have never before been tested in language modeling. Our method uses semantic spaces and clustering to build classes for a class-based language model, which is then coupled with a standard n-gram model to create a very effective language model. Our experiments show that our models reduce the perplexity and improve the accuracy of n-gram language models with no external information added; training is fully unsupervised. Our models are especially effective for inflectional languages, which are particularly hard to model. We show results for five different semantic spaces with different settings and different numbers of classes. The perplexity tests are accompanied by machine translation tests that demonstrate the ability of the proposed models to improve the performance of a real-world application.

18.
It is foreseen that more and more music objects in symbolic format, and multimedia objects such as audio, video, or lyrics integrated with symbolic music representation (SMR), will be published and broadcast via the Internet. The SMRs of the flowing songs or multimedia objects form a music stream. Many interesting applications based on music streams, such as interactive music tutorials, distance music education, and similar-theme searching, make research on content-based retrieval over music streams important. We consider multiple queries with error tolerances over music streams and address the issue of approximate matching in this environment. We propose a novel approach to continuously process multiple queries over music streams, finding all music segments similar to the queries. Our approach is based on the concept of n-grams, with two mechanisms designed to reduce the heavy computation of approximate matching. One mechanism clusters the query n-grams to prune those irrelevant to the incoming data n-gram; the other records a data n-gram that matches a query n-gram as a partial answer and incrementally merges the partial answers of the same query. We implement a prototype system in which songs in the MIDI format are continuously broadcast and the user can specify musical segments as queries to monitor the music streams. Experiment results show the effectiveness and efficiency of the proposed approach.

19.
n-Gram Statistics for Natural Language Understanding and Text Processing
n-gram (n = 1 to 5) statistics and other properties of the English language were derived for applications in natural language understanding and text processing. They were computed from a well-known corpus composed of 1 million word samples. Similar properties were also derived from the 1000 most frequent words of three other corpora. The positional distributions of n-grams obtained in the present study are discussed. Statistical studies on word length and trends of n-gram frequencies versus vocabulary size are presented. In addition to a survey of n-gram statistics found in the literature, a collection of n-gram statistics obtained by other researchers is reviewed and compared.
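Deriving such word n-gram frequency tables from a corpus is mechanical; a minimal sketch using whitespace tokenization (the study's actual tokenization rules are not specified here):

```python
from collections import Counter

def word_ngram_stats(text, n):
    """Frequency table of word n-grams (the study covers n = 1 to 5),
    using simple lowercased whitespace tokenization."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

stats = word_ngram_stats("the cat sat on the mat", 2)
```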

20.
莫秀云, 陈俊洪, 杨振国, 刘文印. 《机器人》 (Robot), 2022, 44(2): 186-194, 202
To improve robots' ability to learn skills and remove the manual teaching process, this paper proposes a sequence-to-sequence framework that automatically generates robot commands from observations of human demonstration videos without special markers. First, Mask R-CNN (a region-based convolutional neural network) narrows the manipulation region, and a two-stream I3D network (Inflated 3D ConvNet) extracts optical-flow and RGB features from the video; next, a bidirectional LSTM (long short-term memory) network obtains context information from the extracted features; finally, self-attention and global attention mechanisms learn the association between video frame sequences and command sequences, and the sequence-to-sequence model outputs the robot commands. Extensive experiments on the extended MPII Cooking Activities 2 dataset and the IIT-V2C dataset show that, compared with existing methods, the proposed method reaches state-of-the-art performance on metrics such as BLEU_4 (0.705) and METEOR (0.462), indicating that it can learn manipulation tasks from human demonstration videos. The framework has also been successfully deployed on a Baxter robot.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号