Similar Documents
20 similar documents found (search time: 31 ms)
1.
Sarcasm is a common pragmatic phenomenon in everyday communication: it enriches a speaker's viewpoint and conveys deeper meaning indirectly. The goal of sarcasm detection is to determine whether a target sentence is sarcastic. Because sarcastic expressions vary widely with context, and their meaning differs across users and discussion topics, this paper builds a context-aware sarcasm detection model that fuses user embeddings with forum-topic embeddings. The model uses the sequence-learning capability of the Paragraph Vector method to encode user-comment documents and forum-topic documents, obtaining user-level sarcasm features and topic features for the sentence to be classified, and employs a bidirectional gated recurrent unit (GRU) network to encode the target sentence itself. Experiments on standard sarcasm detection datasets show that, compared with traditional models such as Bag-of-Words and CNN, the model effectively captures contextual information and achieves higher sarcasm classification accuracy.
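The fusion step described in this abstract can be sketched minimally: user and forum-topic embeddings (learned with Paragraph Vector in the paper) are concatenated with the sentence encoding (a bidirectional GRU in the paper, replaced here by a simple averaging stub) and fed to a logistic classifier. All names, dimensions, and weights below are illustrative assumptions, not the paper's implementation.

```python
import math
import random

def encode_sentence(word_vecs):
    # Stand-in for the Bi-GRU encoder: the real model runs a bidirectional
    # GRU over the word vectors; here we simply average them.
    dim = len(word_vecs[0])
    return [sum(v[i] for v in word_vecs) / len(word_vecs) for i in range(dim)]

def sarcasm_score(word_vecs, user_emb, topic_emb, weights, bias):
    # Fuse the sentence encoding with the user embedding and the
    # forum-topic embedding, then apply a logistic classifier.
    feats = encode_sentence(word_vecs) + user_emb + topic_emb
    z = sum(w * f for w, f in zip(weights, feats)) + bias
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
word_vecs = [[random.gauss(0, 1) for _ in range(8)] for _ in range(7)]
user_emb = [random.gauss(0, 1) for _ in range(4)]   # user document embedding
topic_emb = [random.gauss(0, 1) for _ in range(4)]  # forum-topic embedding
weights = [random.gauss(0, 1) for _ in range(16)]   # 8 + 4 + 4 fused features
print(0.0 < sarcasm_score(word_vecs, user_emb, topic_emb, weights, 0.0) < 1.0)  # → True
```

The sigmoid output is always strictly between 0 and 1, so the score can be thresholded for the binary sarcastic/non-sarcastic decision.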

2.
Every pregroup grammar is shown to be strongly equivalent to one which uses basic types and left and right adjoints of basic types only. Therefore, a semantical interpretation is independent of the order of the associated logic. Lexical entries are read as expressions in a two-sorted predicate logic with ∈ and functional symbols. The parsing of a sentence defines a substitution that combines the expressions associated with the individual words. The resulting variable-free formula is the translation of the sentence and can be computed in time proportional to the size of the parsing structure. Non-logical axioms are associated with certain words (relative pronouns, the indefinite article, comparative determiners). Sample sentences are used to derive the characterizing formula of the DRS corresponding to the translation.

3.
Sentence-level semantic analysis is an objective requirement for the deeper development of language research, and it is also a main factor limiting deep applications of language information processing technology. Building on an exploration of deep semantic analysis methods, and taking the characteristics of Chinese into account, this paper proposes a complete method for constructing semantic dependency graphs and builds a semantic dependency graph bank of 30,000 sentences. Focusing on pivotal (jianyu) constructions, the paper examines the sentence patterns corresponding to all pure pivotal sentences in the corpus, attempts to construct a sentence-pattern system based on semantic dependency graphs, and summarizes mapping rules between syntactic patterns and semantic patterns, thereby providing a knowledge base for building better automatic semantic analysis models.

4.
Orientation Analysis of Chinese Review Texts Based on Topic-Sentiment Sentences*   Total citations: 1 (self: 1, others: 0)
This paper proposes a method for analyzing the sentiment orientation of Chinese reviews based on topic-sentiment sentences. In line with the characteristics of review texts, topics are identified with an n-gram word-matching method; candidate topic-sentiment sentences are extracted by comparing semantic similarity to the topic and performing subjective/objective classification; the sentiment orientation of the several sentences most similar to the topic is then computed, and their average is taken as the overall orientation of the review. The method avoids discourse-structure analysis and excludes subjective information unrelated to the topic… (abstract truncated in the source)

5.
This paper aligns 160 paraphrase texts produced by native Chinese speakers with their source texts clause by clause, yielding 6,484 paraphrase sentence pairs. By generation method, the pairs fall into two broad classes: word substitution and whole-sentence recasting. Analyzing these pairs with pragmatic theory reveals a difference from previously studied paraphrase phenomena: the two sentences of a pair often do not share the same logical-semantic truth value, yet in a specific context they convey the same pragmatic meaning and have equivalent pragmatic function. This suggests that, in natural language processing, recognizing paraphrases that occur in real communication depends not only on grammatical and semantic knowledge bases, but also on knowledge bases containing pragmatic knowledge and contextual information.

6.
The characteristics of Chinese reviews make it possible to represent their shallow discourse structure with topic-sentiment sentences, and this paper accordingly proposes a review-orientation analysis method based on shallow discourse structure. The method identifies topics with an n-gram word-matching approach, extracts candidate topic-sentiment sentences by comparing semantic similarity to the topic and performing subjective/objective classification, computes the orientation of the several sentences most similar to the topic, and takes their average as the overall orientation of the review. The method avoids full discourse-structure analysis and excludes subjective information unrelated to the topic. Experimental results show that the method achieves high accuracy and is practical.

7.
We describe a novel approach that allows humanoid robots to incrementally integrate motion primitives and language expressions, when there are underlying natural language and motion language modules. The natural language module represents sentence structure using word bigrams. The motion language module extracts the relations between motion primitives and the relevant words. Both modules are expressed as probabilistic models and can therefore be integrated so that the robots can both interpret observed motion in the form of sentences and generate the motion corresponding to a sentence command. Incremental learning is needed for a robot that develops these linguistic skills autonomously. The algorithm is derived from optimization of the natural language and motion language modules under constraints on their probabilistic variables, such that the association between motion primitive and sentence in incrementally added training pairs is strengthened. A test based on interpreting observed motion in the form of sentences demonstrates the validity of the incremental statistical learning algorithm.

8.
In text summarization, relevance and coverage are two main criteria that decide the quality of a summary. In this paper, we propose a new multi-document summarization approach SumCR via sentence extraction. A novel feature called Exemplar is introduced to help deal with these two concerns simultaneously during sentence ranking. Unlike conventional approaches, where the relevance value of each sentence is calculated against the whole collection of sentences, the Exemplar value of each sentence in SumCR is obtained within a subset of similar sentences. A fuzzy medoid-based clustering approach is used to produce sentence clusters or subsets, each corresponding to a subtopic of the related topic. This subtopic-based feature captures the relevance of each sentence within different subtopics and thus improves the chance of SumCR producing a summary with wider coverage and less redundancy. Another feature we incorporate in SumCR is Position, i.e., the position at which each sentence appears in the corresponding document. The final score of each sentence is a combination of the subtopic-level feature Exemplar and the document-level feature Position. Experimental studies on DUC benchmark data show the good performance of SumCR and its potential in summarization tasks.
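The two-feature scoring in SumCR can be illustrated with a toy sketch; the similarity matrix, cluster assignment, and the mixing weight `alpha` are assumptions for illustration, and the paper's fuzzy medoid clustering and exact combination rule are not reproduced.

```python
def exemplar(sent_idx, cluster, sim):
    # Exemplar: average similarity of a sentence to the other sentences
    # in its own subtopic cluster, not to the whole collection.
    others = [j for j in cluster if j != sent_idx]
    if not others:
        return 0.0
    return sum(sim[sent_idx][j] for j in others) / len(others)

def position_score(sent_idx, doc_len):
    # Document-level Position feature: earlier sentences score higher.
    return 1.0 - sent_idx / doc_len

def sumcr_score(sent_idx, cluster, sim, doc_len, alpha=0.7):
    # Final score: weighted combination of the subtopic-level Exemplar
    # feature and the document-level Position feature.
    return (alpha * exemplar(sent_idx, cluster, sim)
            + (1 - alpha) * position_score(sent_idx, doc_len))

sim = [[1.0, 0.8, 0.1],
       [0.8, 1.0, 0.2],
       [0.1, 0.2, 1.0]]
print(round(sumcr_score(0, [0, 1], sim, doc_len=3), 2))  # → 0.86
```

Sentence 0 scores high here because it is both highly similar to its subtopic neighbour (Exemplar 0.8) and first in the document (Position 1.0).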

9.
Predicate-Head Identification for Chinese Simple Sentences in EBMT   Total citations: 9 (self: 3, others: 9)
In an example-based Chinese-English machine translation (EBMT) system, sentences must be analyzed appropriately in order to compute sentence similarity. This paper first proposes a compromise method for analyzing Chinese sentences, skeleton dependency analysis, which captures the overall structure of a sentence by identifying its predicate head. It then proposes a strategy that identifies the predicate head of a Chinese example sentence from the predicate head of the corresponding English sentence in the Chinese-English example base. Experimental results are satisfactory.

10.
Sentence alignment of English-Chinese parallel texts can be viewed as a vertex-matching model on a bipartite graph. A bilingual sentence-correlation function based entirely on an English-Chinese dictionary is used to weight the vertex pairs of the bipartite graph. The proposed vertex-matching alignment method first takes the globally maximal-weight vertex pairs as tentative anchors; it then corrects errors among these anchors and adds legitimate vertex pairs not covered by the anchor sequence, based on sentence order, locally maximal-weight vertex pairs, and the admissible range of English-Chinese sentence-length ratios, while partitioning sentence pairs to complete the alignment. In comparative experiments the method outperforms the Champollion sentence aligner, and both the experimental comparison and practical use show that it is feasible.
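The anchor-selection step described above (globally maximal-weight vertex pairs under a dictionary-based weight function) can be sketched as follows; the weight function and toy dictionary are illustrative assumptions, and the later anchor-correction and partitioning steps are omitted.

```python
def pair_weight(en_sent, zh_sent, dictionary):
    # Bipartite edge weight: fraction of English words whose dictionary
    # translation appears in the Chinese sentence.
    hits = sum(1 for w in en_sent
               if any(t in zh_sent for t in dictionary.get(w, [])))
    return hits / max(len(en_sent), 1)

def anchor_pairs(en_sents, zh_sents, dictionary):
    # Greedily take globally maximal-weight vertex pairs as tentative
    # anchors, never reusing a sentence on either side.
    edges = sorted(
        ((pair_weight(e, z, dictionary), i, j)
         for i, e in enumerate(en_sents) for j, z in enumerate(zh_sents)),
        reverse=True)
    used_i, used_j, anchors = set(), set(), []
    for w, i, j in edges:
        if w > 0 and i not in used_i and j not in used_j:
            anchors.append((i, j))
            used_i.add(i)
            used_j.add(j)
    return sorted(anchors)

dictionary = {"cat": ["猫"], "dog": ["狗"]}
en = [["the", "cat"], ["a", "dog"]]
zh = ["那只猫很小", "这只狗很大"]
print(anchor_pairs(en, zh, dictionary))  # → [(0, 0), (1, 1)]
```

Zero-weight pairs are never anchored, so sentences with no dictionary evidence are left for the later correction and gap-filling passes.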

11.
To cope with the flexible and varied ways in which Chinese sentences express meaning, this paper proposes a sentence-similarity measure based on semantics and sentiment, computing similarity at the level of expressed meaning. The method preprocesses sentences with the HIT LTP platform, extracting words, POS tags, dependency labels, and semantic-role labels; the semantic-role labeling results are treated as semantically independent components of the sentence and assigned similarity weight coefficients. Partial similarities between components carrying the same label in the two sentences are computed by combining dependency relations and lexical relations, and the weighted partial similarities yield the overall sentence similarity. In addition, sentiment and sentence-pattern factors are considered: for qualifying sentence pairs, sentiment and sentence-pattern penalties are subtracted from the overall similarity. Experimental results show that the method effectively extracts semantically independent components, computes similarity at the semantic level, alleviates information loss and mismatched sentence constituents, and improves the accuracy and robustness of sentence-similarity computation.

12.
Different types of sentences express sentiment in very different ways. Traditional sentence-level sentiment classification research focuses on a one-technique-fits-all solution or centers only on one special type of sentence. In this paper, we propose a divide-and-conquer approach which first classifies sentences into different types, then performs sentiment analysis separately on sentences of each type. Specifically, we find that sentences tend to be more complex if they contain more sentiment targets. We therefore first apply a neural-network-based sequence model to classify opinionated sentences into three types according to the number of targets that appear in a sentence. Each group of sentences is then fed into a separate one-dimensional convolutional neural network for sentiment classification. Our approach has been evaluated on four sentiment classification datasets and compared with a wide range of baselines. Experimental results show that: (1) sentence-type classification can improve the performance of sentence-level sentiment analysis; (2) the proposed approach achieves state-of-the-art results on several benchmark datasets.
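The divide-and-conquer routing can be sketched minimally; here the per-type 1-D CNNs are replaced by stand-in callables, and the target count is assumed to be given rather than predicted by the paper's neural sequence model.

```python
def sentence_type(num_targets):
    # Route sentences into three types by sentiment-target count;
    # the paper predicts this with a neural sequence model.
    if num_targets == 0:
        return "no_target"
    if num_targets == 1:
        return "one_target"
    return "multi_target"

def classify_sentiment(sentence, num_targets, classifiers):
    # Divide and conquer: dispatch the sentence to the classifier
    # trained for its type.
    return classifiers[sentence_type(num_targets)](sentence)

# Toy per-type classifiers standing in for the 1-D CNNs.
classifiers = {
    "no_target": lambda s: "neutral",
    "one_target": lambda s: "positive" if "good" in s else "negative",
    "multi_target": lambda s: "mixed",
}
print(classify_sentiment("the screen is good", 1, classifiers))  # → positive
```

Keeping the routing separate from the per-type models means each classifier only ever sees sentences of comparable structural complexity, which is the core of the paper's argument.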

13.
This paper presents the results of developing and evaluating an automatic approach that identifies causality boundaries from causality expressions. The approach focuses on explicitly expressed causalities extracted from Root Cause Analysis (RCA) reports in engineering domains. Causality expressions contain Cause and Effect pairs, and multiple expressions can occur in a single sentence. Causality boundaries are semantically annotated text fragments explicitly indicating which parts of a fragment denote Causes and corresponding Effects. Identifying them requires linguistic analysis using natural language processing (NLP). Current off-the-shelf NLP tools are mostly developed from the language models of general-purpose texts, e.g., newspapers. The lack of portability of these tools to engineering domains has been identified as a barrier to achieving comparable analysis accuracy in new domains. One reason is the rare and unpredictable behaviour of certain words in closed domains; ill-formed sentences, abbreviations, and capitalization of common words also contribute to the difficulty. The proposed approach addresses this problem with a probability-based method that learns the probability distribution of the boundaries not only from the NLP analysis but also from local contexts that exploit language conventions occurring in the RCA reports. On a collection of RCA reports obtained from an aerospace company, the proposed approach achieved 86% accuracy, outperforming a baseline that relied only on the NLP analysis.

14.
A Transformation-Based Approach to Syntactic-Function Tagging for Chinese   Total citations: 4 (self: 1, others: 4)
This paper explores a transformation-based method for tagging the syntactic functions of words in Chinese sentences. The system takes word-segmented, POS-tagged sentences as input and outputs the dependency relation of each word. We first designed a Chinese dependency scheme consisting of 44 dependency relations, then annotated 1,300 Chinese sentences in a human-machine interactive way; 1,100 sentences served as training text for acquiring tagging rules, and the remaining 200 were used for testing. With 17 classes of transformation templates, a transformation-based algorithm acquired 60 ordered dependency-tagging rules. At test time, an unseen word is initially tagged with the most frequent dependency relation for its POS tag, which improves robustness. Experiments show that the method is simple and feasible and yields satisfactory preliminary results.
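The initial-tagging and rule-application steps of this transformation-based (Brill-style) scheme can be sketched as follows; the POS tags, relation labels, and the sample rule are hypothetical, not taken from the paper's 44-relation scheme or its 60 learned rules.

```python
def initial_tags(words_pos, most_freq_rel, default="DEP"):
    # Initial annotation: each word gets the most frequent dependency
    # relation observed for its POS tag (also the fallback for unseen
    # words, which is what gives the method its robustness).
    return [most_freq_rel.get(pos, default) for _, pos in words_pos]

def apply_rules(words_pos, tags, rules):
    # Apply the ordered transformation rules in sequence; each rule
    # rewrites a tag when the word's POS, its left neighbour's POS,
    # and the current tag all match the rule's pattern.
    tags = list(tags)
    for pos, prev_pos, old, new in rules:
        for i, (_, p) in enumerate(words_pos):
            prev = words_pos[i - 1][1] if i > 0 else None
            if p == pos and prev == prev_pos and tags[i] == old:
                tags[i] = new
    return tags

words_pos = [("他", "r"), ("吃", "v"), ("饭", "n")]
most_freq_rel = {"r": "SBV", "v": "HED", "n": "ATT"}
tags = initial_tags(words_pos, most_freq_rel)
# Hypothetical learned rule: a noun right after a verb, currently
# tagged ATT, becomes the verb's object (VOB).
tags = apply_rules(words_pos, tags, [("n", "v", "ATT", "VOB")])
print(tags)  # → ['SBV', 'HED', 'VOB']
```

Because the rules are ordered, each one patches the residual errors left by the initial tagging and by all earlier rules, which is the defining property of transformation-based learning.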

15.
Wang Yu, Wang Fang. Application Research of Computers (《计算机应用研究》), 2020, 37(6): 1769-1773
Community question-answering systems are full of noise that hampers users' information retrieval, and previous question-retrieval models mostly work at the word level. To address these problems, this paper builds a sentence-level question-retrieval model. Based on the sentence-category knowledge of the Hierarchical Network of Concepts (HNC) theory, the new model computes inter-question similarity at three levels: pragmatics, syntax, and semantics. A question-classification algorithm determines the sentence categories of the query question and the candidate questions, yielding the pragmatic similarity; the syntactic and semantic similarities are computed from the structure of the sentence-category expression and the composition of its semantic chunks, respectively. Experiments on a real-world dataset show that the new HNC-based model improves the accuracy of question-retrieval results.

16.
To address the main challenges of microblog-oriented Chinese news summarization, this paper proposes an automatic summarization method that combines matrix factorization with submodular maximization. The method first obtains latent semantic vectors for news texts with an orthogonal matrix factorization model, which alleviates the information sparsity of short texts and keeps the projection directions approximately orthogonal to reduce redundancy. It then evaluates candidate sentence sets for relevance and diversity with an objective consisting of several monotone submodular functions plus a non-submodular function that scores sentence dissimilarity, and finally applies a greedy algorithm to generate the summary. Experiments on the NLPCC 2015 dataset show that the proposed method effectively improves the quality of microblog-oriented news summarization, with ROUGE scores exceeding the other baseline systems.
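The final greedy step can be sketched with a plain monotone submodular coverage objective; the coverage function below is a generic stand-in, not the paper's combined relevance/diversity objective.

```python
def coverage(selected, n_sents, sim):
    # Monotone submodular coverage: each sentence in the collection is
    # "covered" up to its best similarity with any selected sentence.
    return sum(max((sim[i][j] for j in selected), default=0.0)
               for i in range(n_sents))

def greedy_summary(n_sents, sim, budget):
    # Standard greedy algorithm for submodular maximization: repeatedly
    # add the sentence with the largest marginal gain until the budget
    # is exhausted or no sentence improves coverage.
    selected = []
    while len(selected) < budget:
        base = coverage(selected, n_sents, sim)
        best, best_gain = None, 0.0
        for j in range(n_sents):
            if j in selected:
                continue
            gain = coverage(selected + [j], n_sents, sim) - base
            if gain > best_gain:
                best, best_gain = j, gain
        if best is None:
            break
        selected.append(best)
    return selected

sim = [[1.0, 0.9, 0.1],
       [0.9, 1.0, 0.2],
       [0.1, 0.2, 1.0]]
print(greedy_summary(3, sim, budget=2))  # → [1, 2]
```

Note how the second pick is sentence 2 rather than sentence 0: although 0 is individually strong, it is nearly redundant with the already-selected sentence 1, and the submodular objective rewards the diverse choice instead.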

17.
Dynamic Knowledge Extraction Based on Sentence Clustering and Recognition   Total citations: 6 (self: 0, others: 6)
Su Mu, Xiao Renbin. Chinese Journal of Computers (《计算机学报》), 2001, 24(5): 487-495
Motivated by the clustering phenomenon in natural language and the requirement to update knowledge systems dynamically, this paper proposes a dynamic knowledge-extraction method based on sentence clustering and recognition. It first presents a research framework describing the transformation from natural-language documents to an object-oriented knowledge system. It then studies sentence vectorization, giving several basic definitions and a decision theorem, and discusses post-processing of sentence-element attribute vectors. A neural-network-based sentence clustering and recognition method is proposed, with a prior confidence measure for the credibility of recognition results; an ART2 neural network simulation program written in Matlab is used to evaluate sentence recognition, with an analysis of the results. Based on the ART2 recognition results, the clustered sentences are converted into knowledge form via a breadth-first method for intermediate-code generation, and a posterior confidence is defined as the final credibility measure for sentence recognition and semantic-model construction; the implementation steps are described concretely for conjunctive-rule sentence patterns. Finally, a new structural-modeling method generates structured derivation relations, completing the knowledge-extraction process from natural-language documents to an object-oriented knowledge system. A mechanical-CAD application example runs through the paper, demonstrating the implementation and confirming the effectiveness of the proposed method.

18.
Computation on Sentence Semantic Distance for Novelty Detection   Total citations: 1 (self: 0, others: 1)
Novelty detection is to retrieve new information and filter redundancy from given sentences that are relevant to a specific topic. In TREC 2003, the authors tried an approach to novelty detection based on semantic distance computation. The motivation is to expand a sentence by introducing semantic information. The computation of semantic distance between sentences incorporates WordNet with statistical information. Novelty detection is treated as a binary classification problem: new sentence or not. The feature vector, used in the vector space model for classification, consists of various factors, including the semantic distance from the sentence to the topic and the distance from the sentence to the relevant context occurring before it. New sentences are then detected with Winnow and support vector machine classifiers, respectively. Several experiments are conducted to survey the relationship between different factors and performance; they show that semantic computation is promising for novelty detection. The ratio of new-sentence count to relevant-sentence count is further studied for different relevant-document sizes, and it is found that the ratio decreases at a roughly constant rate (about 0.86). Another group of experiments, guided by this ratio, demonstrates that the ratio helps improve novelty detection performance.
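The feature vector used for the binary classification can be sketched as follows, with plain cosine distance standing in for the paper's WordNet-plus-statistics semantic distance; the vectors are toy examples.

```python
def cosine_dist(a, b):
    # 1 - cosine similarity: a simple stand-in for the semantic
    # distance that the paper computes from WordNet and statistics.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return 1.0 - dot / (na * nb)

def novelty_features(sent_vec, topic_vec, context_vec):
    # Feature vector for the binary new/not-new classifier: distance to
    # the topic, and distance to the relevant context seen so far.
    return [cosine_dist(sent_vec, topic_vec),
            cosine_dist(sent_vec, context_vec)]

# A sentence identical to the topic but far from the prior context:
# low first feature (on-topic), high second feature (novel).
print(novelty_features([1.0, 0.0], [1.0, 0.0], [0.0, 1.0]))  # → [0.0, 1.0]
```

A downstream classifier (Winnow or an SVM in the paper) would then learn a decision boundary over such features: small topic distance plus large context distance signals a new, on-topic sentence.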

19.
Tibetan is a language with very flexible word order, and shallow analyses such as Tibetan morphology and syntax cannot adequately support Tibetan natural language understanding. Starting from Tibetan sentences with simple sentence patterns, this paper studies projective semantic dependency parsing for Tibetan, builds a Tibetan semantic dependency treebank, and designs feature templates for classifying semantic dependency arc types. Using a maximum-entropy classification model, the types of dependency arcs in manually analyzed sentences are classified and labeled, providing a new perspective and better theoretical support for future semantic dependency parsing.

20.
Complex sentences are one of the basic units of natural language, and recognizing them and identifying their semantic relations plays an important role in syntactic parsing and discourse understanding. This paper uses neural network models to recognize complex sentences in natural corpora and to identify their relations, constructing a joint model for complex-sentence recognition and relation identification so as to minimize error propagation. In the recognition task, a Bi-LSTM captures contextual semantic information, an attention mechanism captures long-distance collocations within the sentence, and a CNN captures local sentence information. In the relation-identification task, BERT enhances the semantic representation of sentences, and a Tree-LSTM models syntactic structure and constituent labels. Experiments on the Chinese CAMR corpus show that the attention-based recognition model reaches an F1 of 91.7% and the Tree-LSTM-based relation model an F1 of 69.15%. In the joint model, the two tasks reach F1 scores of 92.15% and 66.25%, respectively, indicating that joint learning lets the tasks share more features and improves model performance.

