Similar Documents
1.
This paper proposes a single-model system combination algorithm for shift-reduce parsers. In the training stage, the method builds multiple shift-reduce parsers for combination by adjusting the distribution of the training data. In the decoding stage, each shift-reduce parser first analyzes the input sentence; a linear model then scores the parse trees output by the individual parsers, and the highest-scoring tree is selected as the final result. Experiments are conducted on the Penn English Treebank, and the results show that the method significantly improves over the baseline system.
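A minimal sketch of the decoding step described above, assuming each candidate tree is paired with a bag of feature counts and the linear model is a fixed weight vector. The feature names, weights, and candidate trees are invented placeholders, not values or features from the paper.

def linear_score(features, weights):
    """Dot product between a tree's feature counts and the model weights."""
    return sum(count * weights.get(name, 0.0) for name, count in features.items())

def select_best_tree(candidates, weights):
    """Return the candidate parse tree with the highest linear-model score."""
    return max(candidates, key=lambda cand: linear_score(cand["features"], weights))

if __name__ == "__main__":
    # Hypothetical outputs of three shift-reduce parsers on one sentence.
    candidates = [
        {"tree": "(S (NP ...) (VP ...))", "features": {"rule:S->NP VP": 1, "depth": 5}},
        {"tree": "(S (NP ...) (VP ...) (PP ...))", "features": {"rule:S->NP VP PP": 1, "depth": 4}},
        {"tree": "(FRAG ...)", "features": {"rule:FRAG": 1, "depth": 2}},
    ]
    # Toy weights standing in for a trained linear model.
    weights = {"rule:S->NP VP": 0.8, "rule:S->NP VP PP": 0.5, "rule:FRAG": -1.0, "depth": 0.1}
    print(select_best_tree(candidates, weights)["tree"])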

2.
Chinese Semantic Role Labeling Based on Shallow Parsing
Semantic role labeling is an important means of acquiring semantic information. Many existing semantic role labeling systems are built on full syntactic parsing, but because current Chinese full parsers perform relatively poorly, Chinese semantic role labeling based on automatic full parsing is not satisfactory. This paper therefore builds Chinese semantic role labeling on shallow parsing. In the parsing stage, "pseudo-head morpheme" features obtained through word-formation analysis effectively alleviate word-level data sparseness and improve parsing performance, reaching an F-score of 0.93. In the labeling stage, morpheme features of the target verb obtained through word formation describe the verb's internal structure at a fine granularity, providing additional information for role labeling. In addition, the paper proposes a "coarse frame" feature for sentences, which effectively simulates the subcategorization frame information used in role labeling based on full parsing. The resulting role labeling system achieves an F-score of 0.74, a notable improvement over previous work (0.71), demonstrating the effectiveness of the proposed approach.

3.
Automatic Segmentation and Labeling of Chinese Phrases
Considering the difficulties that traditional rule-based Chinese parsers encounter on large-scale real-world text, this paper explores the use of statistical methods for automatic Chinese parsing and proposes a statistics-based algorithm for automatic segmentation and labeling of Chinese phrases. The algorithm consists of three stages: predicting split points, bracket matching, and parse tree generation. Statistics collected from a manually annotated treebank are used for automatic syntactic disambiguation, yielding a single best parse tree and thus completing the top-down phrase segmentation and labeling of a sentence. In a closed test on more than one thousand sentences, phrase segmentation accuracy is about 86% and phrase labeling accuracy about 92%, which is fairly satisfactory.

4.
Dependency parsing still relies mainly on supervised machine learning, which requires large-scale, high-quality treebanks as training data, yet Chinese dependency treebank resources are relatively scarce and treebank annotation is time-consuming and labor-intensive. Facing large amounts of unlabeled data, this paper applies active learning to Chinese dependency parsing, preferentially selecting the instances that the parsing model predicts with low confidence for manual annotation. The paper proposes and compares several criteria for measuring the confidence of dependency parsing predictions. Experiments show that, compared with random selection, active learning improves Chinese dependency parsing accuracy by up to 0.8% when the same number of training instances is used; conversely, active learning needs fewer annotated instances to reach the same accuracy, reducing manual annotation by up to 30%.
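One way to read the selection criterion is as uncertainty sampling: sentences whose best and second-best analyses score almost equally are sent to annotators first. The sketch below illustrates this idea with a margin criterion; the parse_nbest stub and its random scores are stand-ins, not the confidence measures actually compared in the paper.

import random

def margin(scored_parses):
    """Confidence margin: score gap between the best and second-best parse."""
    if len(scored_parses) < 2:
        return float("inf")
    ordered = sorted((score for score, _ in scored_parses), reverse=True)
    return ordered[0] - ordered[1]

def select_for_annotation(sentences, parse_nbest, budget):
    """Send the `budget` sentences with the smallest margin to annotators."""
    return sorted(sentences, key=lambda sent: margin(parse_nbest(sent)))[:budget]

if __name__ == "__main__":
    # Stand-in parser: returns (score, parse) pairs with random scores.
    def parse_nbest(sentence):
        return [(random.random(), f"parse-{i}") for i in range(3)]

    pool = ["他 喜欢 读书", "研究 取得 新 进展", "句法 分析 很 重要"]
    print(select_for_annotation(pool, parse_nbest, budget=2))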

5.
Syntactic parsing plays a key role in natural language information processing. This paper describes the characteristics of Mongolian and the difficulties of phrase structure analysis for Mongolian sentences. Based on the characteristics of the language, a phrase annotation scheme is defined, a Mongolian phrase treebank is built, and automatic parsing of Mongolian sentences is attempted. The first version of the parser reaches a parsing accuracy of 62%. Test results show that the parser can largely identify phrase structure types and generate syntactic tree structures, but there is still much room for improvement in recognizing the internal relations within phrase structures. Finally, the paper summarizes related problems that the parser can address in the near term.

6.
Research on Strategies for Integrating Chinese Lexical Analysis and Syntactic Parsing
Exploiting external resources is an effective way to improve parsing performance. Using a Chinese lexical analyzer as such an external resource, this paper proposes a general conversion method that tightly integrates the lexical analyzer with the parser. Transformation-based error-driven learning and conditional random fields are used to handle conversion between different word segmentation and part-of-speech tagging standards. On the parsing side, the paper proposes a multiple-submodel parser that effectively combines a head-driven model with a structural context model. The integrated Chinese parser achieves a state-of-the-art F1 of 82.5% on the test set of the Penn Chinese Treebank 1.0.

7.
A Survey of Recent Advances in Syntactic Parsing
The goal of syntactic parsing is to analyze an input sentence and obtain its syntactic structure; it is one of the classic tasks in natural language processing. Current research on this task focuses mainly on improving parser accuracy through automatic learning from data. This paper surveys recent developments in syntactic parsing, reviewing new methods and findings published in 2018-2019 in three subareas: supervised parsing, unsupervised parsing, and cross-domain and cross-lingual parsing, and discusses the research prospects of each subarea.

8.
Hierarchical parsing is a simple and fast approach to full syntactic parsing that decomposes parsing into three stages: part-of-speech tagging, chunking, and tree construction. This paper further divides chunking into base-chunk and complex-chunk analysis and replaces the maximum entropy model with a conditional random field model for sequence labeling. Because error accumulation is particularly severe in hierarchical parsing, the paper proposes a simple and practical error prediction and collaborative correction algorithm: labeling results predicted to be erroneous at one layer are tracked into the next layer, and the prediction scores of the two layers are combined for correction. Experimental results show that with this correction method, hierarchical parsing achieves accuracy comparable to mainstream Chinese parsers while preserving parsing speed.

9.
To address chunk boundary identification and chunk-internal structure analysis in parsing, this paper explores chunk analysis based on the concept compound chunk scheme. Through a comparison of concept compound chunks with the earlier base-chunk and functional-chunk schemes, the main difficulties of automatic concept compound chunk analysis are examined in depth, and a shift-reduce based method for automatic Chinese concept compound chunk analysis is proposed. On a concept compound chunk corpus automatically extracted from the Tsinghua Chinese Treebank (TCT), the performance of automatic analysis is evaluated both vertically and horizontally, at multiple levels and from multiple angles. Preliminary experimental results demonstrate the effectiveness of the method for simple concept compound chunks and lay a solid foundation for subsequent research on the syntactic and semantic analysis of more complex concept compound chunks.

10.
Relation Hierarchy Analysis of Chinese Multi-Relation Compound Sentences
鲁松  白硕  李素建  刘群 《软件学报》2001,12(7):987-995
The syntactic analysis of Chinese multi-relation compound sentences consists of two parts: relation analysis and hierarchy analysis. This work focuses on hierarchy analysis, i.e., the process of determining how clauses connected by various logical or coordinate relations are organized, level by level, into a complex compound sentence with subordinate relations. To represent the hierarchical structure of multi-relation compound sentences effectively and formally, the concept of a relation hierarchy tree is proposed; a grammar is constructed on this basis, and a partially data-driven deterministic shift-reduce algorithm is used to perform relation hierarchy analysis. In an open test, the implemented parser for multi-relation compound sentences achieves an accuracy of 93.56%, fully validating the effectiveness and correctness of the proposed method.

11.
Chinese Dependency Parsing Based on Action Modeling
Deterministic dependency parsing, i.e., parsing based on analysis actions, is often regarded as an efficient parsing algorithm, but its accuracy is somewhat lower than that of more complex parsing models. This paper compares deterministic parsing with generative and discriminative parsing models, using the Penn Chinese Treebank as experimental data. The results show that for Chinese dependency parsing, deterministic parsing outperforms both generative and discriminative parsing. Furthermore, we observe that deterministic parsing is a greedy algorithm: at each step it chooses only the most probable action and thus loses the global view of the dependency analysis of the whole sentence. Based on this observation, we propose two models for modeling parsing actions so as to avoid the greediness of the original deterministic approach. Experimental results show that dependency parsing models based on action modeling outperform the original deterministic method while retaining low time complexity.
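For reference, a bare-bones greedy arc-standard parser in the style the paper criticizes: at each step it commits to whichever action currently scores highest. The score_action heuristic below is a placeholder, not the paper's model.

def greedy_parse(words, score_action):
    """Arc-standard parsing: SHIFT / LEFT-ARC / RIGHT-ARC chosen greedily."""
    stack, buffer, arcs = [], list(range(len(words))), []
    while buffer or len(stack) > 1:
        legal = []
        if buffer:
            legal.append("SHIFT")
        if len(stack) >= 2:
            legal += ["LEFT-ARC", "RIGHT-ARC"]
        action = max(legal, key=lambda a: score_action(a, stack, buffer, words))
        if action == "SHIFT":
            stack.append(buffer.pop(0))
        elif action == "LEFT-ARC":           # second-from-top depends on top
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep))
        else:                                 # RIGHT-ARC: top depends on second-from-top
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs                               # list of (head index, dependent index)

if __name__ == "__main__":
    # Placeholder scorer: prefer shifting while the buffer is non-empty.
    def score_action(action, stack, buffer, words):
        return {"SHIFT": 1.0 if buffer else -1.0, "LEFT-ARC": 0.2, "RIGHT-ARC": 0.5}[action]

    print(greedy_parse(["经济", "发展", "迅速"], score_action))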

12.
In this paper we develop novel algorithmic ideas for building a natural language parser grounded upon the hypothesis of incrementality. Although widely accepted and experimentally supported under a cognitive perspective as a model of the human parser, the incrementality assumption has never been exploited for building automatic parsers of unconstrained real texts. The essentials of the hypothesis are that words are processed in a left-to-right fashion, and the syntactic structure is kept totally connected at each step. Our proposal relies on a machine learning technique for predicting the correctness of partial syntactic structures that are built during the parsing process. A recursive neural network architecture is employed for computing predictions after a training phase on examples drawn from a corpus of parsed sentences, the Penn Treebank. Our results indicate the viability of the approach and lay out the premises for a novel generation of algorithms for natural language processing which more closely model human parsing. These algorithms may prove very useful in the development of efficient parsers.

13.
Dependency parsers, which are widely used in natural language processing tasks, employ a representation of syntax in which the structure of sentences is expressed in the form of directed links (dependencies) between their words. In this article, we introduce a new approach to transition-based dependency parsing in which the parsing algorithm does not directly construct dependencies, but rather undirected links, which are then assigned a direction in a postprocessing step. We show that this alleviates error propagation, because undirected parsers do not need to observe the single-head constraint, resulting in better accuracy. Undirected parsers can be obtained by transforming existing directed transition-based parsers as long as they satisfy certain conditions. We apply this approach to obtain undirected variants of three different parsers (the Planar, 2-Planar, and Covington algorithms) and perform experiments on several data sets from the CoNLL-X shared tasks and on the Wall Street Journal portion of the Penn Treebank, showing that our approach is successful in reducing error propagation, produces improvements in parsing accuracy in most of the cases, and achieves results competitive with state-of-the-art transition-based parsers.
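A toy illustration of the two-stage idea, assuming the first stage has already produced an undirected tree: pick a root and orient every edge away from it. The root-scoring rule here is a stand-in, not one of the algorithms evaluated in the article.

from collections import defaultdict, deque

def direct_edges(num_words, undirected_edges, root_score):
    """Turn an undirected dependency tree into a directed one by choosing a
    root and orienting every edge away from it (head -> dependent)."""
    adj = defaultdict(list)
    for a, b in undirected_edges:
        adj[a].append(b)
        adj[b].append(a)
    root = max(range(num_words), key=root_score)    # pick the most root-like word
    heads, seen, queue = {root: None}, {root}, deque([root])
    while queue:                                     # BFS outward from the chosen root
        head = queue.popleft()
        for dep in adj[head]:
            if dep not in seen:
                seen.add(dep)
                heads[dep] = head
                queue.append(dep)
    return heads                                     # word index -> head index (None for root)

if __name__ == "__main__":
    words = ["The", "cat", "sleeps"]
    links = [(0, 1), (1, 2)]                         # hypothetical undirected first-stage output
    root_score = lambda i: 1.0 if words[i] == "sleeps" else 0.0   # stand-in root scorer
    print(direct_edges(len(words), links, root_score))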

14.
H. Mössenböck 《Software》1988,18(7):691-700
We present a simple method for connecting semantic actions to parsers. Although applicable to any kind of parser, it is especially suited for LR parsers. The method is based on the idea of separating syntax analysis and semantic processing and executing semantic actions by procedures, similar to those of a recursive descent compiler. The procedures are driven by structural information about the source program, which is collected during parsing. The method is applicable to L-attributed grammars. It can be incorporated easily into any existing parser.
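A small sketch of the separation described above, assuming the parser's only job is to record which grammar rules were applied; semantic procedures, one per rule as in a recursive descent compiler, then walk that recorded structure. The tiny expression grammar and the hand-built structure tree are illustrative only, not taken from the paper.

def sem_number(node):
    return int(node["text"])

def sem_add(node):
    left, right = node["children"]
    return evaluate(left) + evaluate(right)

def sem_mul(node):
    left, right = node["children"]
    return evaluate(left) * evaluate(right)

# Semantic procedures keyed by the rule names recorded during parsing.
SEMANTICS = {"Number": sem_number, "Add": sem_add, "Mul": sem_mul}

def evaluate(node):
    """Drive the semantic procedures using the structure built by the parser."""
    return SEMANTICS[node["rule"]](node)

if __name__ == "__main__":
    # Structure tree a hypothetical LR parser would record for "2 + 3 * 4".
    tree = {"rule": "Add", "children": [
        {"rule": "Number", "text": "2"},
        {"rule": "Mul", "children": [
            {"rule": "Number", "text": "3"},
            {"rule": "Number", "text": "4"},
        ]},
    ]}
    print(evaluate(tree))   # -> 14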

15.
赵亚琴  周献中 《计算机应用》2005,25(6):1339-1341,1344
This paper proposes and implements a neural-network-based GLR (Generalized LR) parsing algorithm. Exploiting the self-learning, self-organizing, and parallel distributed processing capabilities of neural networks, the algorithm replaces the GLR parse table with a BP (back-propagation) neural network model that simulates the shift and reduce actions, analyzing syntactic structure by computing the network's outputs. This alleviates the drawback of GLR parsing that, when many shift-reduce conflicts exist, copying the parse stack makes the action table very large. Experimental results show that the algorithm generalizes well.

16.
Learning to Parse Natural Language with Maximum Entropy Models
Ratnaparkhi  Adwait 《Machine Learning》1999,34(1-3):151-175
This paper presents a machine learning system for parsing natural language that learns from manually parsed example sentences, and parses unseen data at state-of-the-art accuracies. Its machine learning technology, based on the maximum entropy framework, is highly reusable and not specific to the parsing problem, while the linguistic hints that it uses to learn can be specified concisely. It therefore requires a minimal amount of human effort and linguistic knowledge for its construction. In practice, the running time of the parser on a test sentence is linear with respect to the sentence length. We also demonstrate that the parser can train from other domains without modification to the modeling framework or the linguistic hints it uses to learn. Furthermore, this paper shows that research into rescoring the top 20 parses returned by the parser might yield accuracies dramatically higher than the state-of-the-art.
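The core of a maximum entropy decision can be sketched in a few lines: exponentiate the summed weights of the active contextual predicates for each candidate action and normalize. The features, actions, and weights below are invented for illustration; a real model would estimate the weights from a treebank (e.g., with an algorithm such as GIS).

import math

def maxent_probs(active_features, weights, actions):
    """p(action | context): exponentiate summed (feature, action) weights and normalize."""
    scores = {a: sum(weights.get((f, a), 0.0) for f in active_features) for a in actions}
    z = sum(math.exp(s) for s in scores.values())
    return {a: math.exp(s) / z for a, s in scores.items()}

if __name__ == "__main__":
    # Hypothetical contextual predicates for one parser decision.
    context = ["prev_tag=DT", "word=dog", "next_word=barks"]
    actions = ["start-NP", "join-NP", "other"]
    # Toy weights standing in for an estimated model.
    weights = {
        ("prev_tag=DT", "join-NP"): 1.2,
        ("word=dog", "join-NP"): 0.7,
        ("next_word=barks", "other"): 0.3,
        ("word=dog", "start-NP"): -0.5,
    }
    probs = maxent_probs(context, weights, actions)
    print(max(probs, key=probs.get), probs)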

17.
Analyzing the syntactic structure of natural languages by parsing is an important task in artificial intelligence. Due to the complexity of natural languages, individual parsers tend to make different yet complementary errors. We propose a neural network based approach to combine parses from different parsers to yield a more accurate parse than individual ones. Unlike conventional approaches, our method directly transforms linearized candidate parses into the ground-truth parse. Experiments on the Penn English Treebank show that the proposed method improves over a state-of-the-art parser combination approach significantly.

18.
This article describes how a treebank of ungrammatical sentences can be created from a treebank of well-formed sentences. The treebank creation procedure involves the automatic introduction of frequently occurring grammatical errors into the sentences in an existing treebank, and the minimal transformation of the original analyses in the treebank so that they describe the newly created ill-formed sentences. Such a treebank can be used to test how well a parser is able to ignore grammatical errors in texts (as people do), and can be used to induce a grammar capable of analysing such sentences. This article demonstrates these two applications using the Penn Treebank. In a robustness evaluation experiment, two state-of-the-art statistical parsers are evaluated on an ungrammatical version of Sect. 23 of the Wall Street Journal (WSJ) portion of the Penn Treebank. This experiment shows that the performance of both parsers degrades with grammatical noise. A breakdown by error type is provided for both parsers. A second experiment retrains both parsers using an ungrammatical version of WSJ Sections 2–21. This experiment indicates that an ungrammatical treebank is a useful resource in improving parser robustness to grammatical errors, but that the correct combination of grammatical and ungrammatical training data has yet to be determined.

19.
Haruno  Masahiko  Shirai  Satoshi  Ooyama  Yoshifumi 《Machine Learning》1999,34(1-3):131-149
This paper describes a novel and practical Japanese parser that uses decision trees. First, we construct a single decision tree to estimate modification probabilities: how one phrase tends to modify another. Next, we introduce a boosting algorithm in which several decision trees are constructed and then combined for probability estimation. The constructed parsers are evaluated using the EDR Japanese annotated corpus. The single-tree method significantly outperforms the conventional Japanese stochastic methods. Moreover, the boosted version of the parser is shown to have great advantages: (1) a better parsing accuracy than its single-tree counterpart for any amount of training data and (2) no over-fitting to data for various iterations. The presented parser, the first non-English stochastic parser with practical performance, should tighten the coupling between natural language processing and machine learning.
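A toy sketch of the boosted-decision-tree idea using scikit-learn's AdaBoostClassifier (decision stumps by default): a classifier estimates the probability that one phrase modifies another. The features and the synthetic training rows are invented for illustration and are not the features or data used with the EDR corpus.

from sklearn.ensemble import AdaBoostClassifier

# Each row: [distance between the two phrases,
#            candidate head is sentence-final (0/1),
#            modifier ends in a case particle (0/1)]; label 1 = "modifies".
X = [[1, 1, 1], [2, 1, 1], [1, 0, 1], [3, 0, 0], [4, 0, 0], [2, 0, 0]]
y = [1, 1, 1, 0, 0, 0]

model = AdaBoostClassifier(n_estimators=10, random_state=0)
model.fit(X, y)

# Estimated probability that a nearby, sentence-final candidate head is modified.
print(model.predict_proba([[1, 1, 1]])[0][1])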
