Similar Documents
19 similar documents found (search time: 140 ms)
1.
Shift-reduce parsers run in linear time, which makes them especially attractive for large-scale parsing tasks. However, current shift-reduce parsers still fall well short of the best parsers in the field, such as the Berkeley Parser. This paper investigates how to improve a shift-reduce parser with uptraining and unlabeled data so that it approaches the Berkeley Parser's accuracy. We first run the Berkeley Parser over large amounts of unlabeled data, then use the resulting automatically annotated data as additional training data to improve both the POS tagger and the shift-reduce parser. Experimental results show that uptraining with unlabeled data improves shift-reduce parsing accuracy by 2.3%, to 82.4%, which is comparable to the Berkeley Parser, while the final system retains a clear speed advantage (7 times faster than the Berkeley Parser).
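The linear-time shift-reduce loop behind this family of parsers can be sketched as follows; the trivial `score_actions` heuristic stands in for the trained classifier, and all names are illustrative rather than taken from the paper:

```python
def score_actions(stack, buffer):
    # Toy heuristic: reduce whenever two items sit on the stack,
    # otherwise shift. A real system would query a trained model here.
    if len(stack) >= 2:
        return "REDUCE"
    return "SHIFT"

def shift_reduce_parse(words):
    """Greedily parse a non-empty word list into a binary tree of nested tuples."""
    stack, buffer = [], list(words)
    # Each word is shifted once and each reduce shrinks the stack,
    # so the loop runs a number of steps linear in the sentence length.
    while buffer or len(stack) > 1:
        action = score_actions(stack, buffer)
        if action == "SHIFT" and buffer:
            stack.append(buffer.pop(0))
        else:
            right = stack.pop()
            left = stack.pop()
            stack.append((left, right))
    return stack[0]

tree = shift_reduce_parse(["the", "cat", "sat"])
```

Uptraining then simply adds the slower parser's automatic output on unlabeled text to the training data for the classifier behind `score_actions`.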

2.
Kazakh parsing currently suffers from low accuracy, and neural approaches to Kazakh parsing remain largely unexplored. For Kazakh phrase-structure parsing, this work adopts a shift-reduce method in which the stack stores sentence spans rather than partial tree structures, so parse trees need not be binarized during decoding. A bidirectional LSTM extracts span features, capturing each span's information in the context of the whole sentence; a multilayer perceptron is then trained as the parsing model, and dynamic programming selects the optimal parse at decoding time. The resulting Kazakh constituency parser reaches an accuracy of 76.92%, a further improvement for Kazakh parsing that lays a good foundation for subsequent Kazakh machine translation and semantic analysis.
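One common way to realize the span-on-stack idea with BiLSTM features is the fencepost-difference encoding, where a span (i, j) is represented by differences of forward and backward hidden states; this is a sketch of that encoding with fake hidden states, and the paper's exact architecture may differ:

```python
def span_feature(fwd, bwd, i, j):
    """Feature vector for the span between fenceposts i and j:
    concatenation of (fwd[j] - fwd[i]) and (bwd[i] - bwd[j])."""
    return ([a - b for a, b in zip(fwd[j], fwd[i])] +
            [a - b for a, b in zip(bwd[i], bwd[j])])

# Fake BiLSTM states for a 5-word sentence: one vector per fencepost (0..5).
# Real values would come from running a BiLSTM over the word embeddings.
fwd = [[float(t * 10 + k) for k in range(4)] for t in range(6)]
bwd = [[float(-(t * 10 + k)) for k in range(4)] for t in range(6)]

feat = span_feature(fwd, bwd, 1, 4)   # span covering words 2..4
```

The empty span (i, i) maps to the zero vector, which is why no binarization bookkeeping is needed on the stack.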

3.
To address chunk-boundary detection and chunk-internal structure analysis in parsing, this paper explores chunk parsing based on the concept compound chunk annotation scheme. By comparing concept compound chunks with the earlier base-chunk and functional-chunk schemes, we identify the main difficulties in parsing them automatically and propose a shift-reduce approach to Chinese concept compound chunk parsing. On a concept compound chunk corpus automatically extracted from the Tsinghua Chinese Treebank (TCT), we evaluate the parser at multiple levels and from multiple angles, both longitudinally and horizontally. Initial experimental results confirm the method's effectiveness for simple concept compound chunks and lay a solid foundation for subsequent syntactic and semantic analysis of more complex ones.

4.
To further improve the accuracy of Kazakh parsing and lay a solid foundation for Kazakh natural language processing, this work studies transition-based Kazakh parsing with an improved transition scheme: an in-order traversal converts each parse tree into a sequence of actions. The parser is built with neural networks, using three long short-term memory networks (LSTMs) to represent the stack, the buffer, and the action history during training; the predicted action probabilities then yield the action sequence and hence the parse. The improved transition scheme achieves a parsing accuracy of 74.37%.
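The in-order tree-to-action conversion can be sketched for binarized trees as follows; the action names `SHIFT`, `PROJECT`, and `REDUCE` are illustrative, and the paper's full transition system (with the three LSTMs over stack, buffer, and action history) is not reproduced here:

```python
def in_order_actions(tree):
    """Convert a binarized tree of nested pairs into an action sequence by
    in-order traversal: leftmost child first, then project a constituent,
    then the remaining child, then reduce."""
    if isinstance(tree, str):          # leaf: consume one word
        return ["SHIFT"]
    left, right = tree
    return (in_order_actions(left) + ["PROJECT"] +
            in_order_actions(right) + ["REDUCE"])

actions = in_order_actions((("a", "b"), "c"))
```

Training then reduces to predicting the next action in this sequence given the current parser state.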

5.
刘水  李生  赵铁军  刘鹏远 《软件学报》2012,23(5):1120-1131
To improve the reordering ability of phrase-based machine translation systems, this paper proposes a reordering model based on source-side head-modifier dependency structures and integrates it into the translation system as a soft constraint. The model offers a way to exploit syntactic tree resources in machine translation by mapping each syntactic tree into a set of head-modifier dependency relations. Under the default settings of the phrase-based system, the model significantly improves translation quality; built on top of the system's existing lexicalized reordering model, it injects syntactic information into the translation model. Experimental results show that the model clearly improves the system's local reordering.
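As a minimal illustration of the representation this reordering model conditions on, a dependency parse can be collapsed into a set of head-modifier word pairs; the example sentence and indices below are invented:

```python
def head_modifier_pairs(words, heads):
    """heads[i] is the index of word i's head, or -1 for the root.
    Returns the set of (head word, modifier word) pairs."""
    return {(words[h], words[i]) for i, h in enumerate(heads) if h >= 0}

words = ["I", "read", "the", "book"]
heads = [1, -1, 3, 1]   # "read" is the root; "I" and "book" modify it
pairs = head_modifier_pairs(words, heads)
```

A soft-constraint integration would then add a feature that rewards translation hypotheses whose ordering keeps each modifier close to its head.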

6.
This paper proposes a method for building a Uyghur dependency treebank from Chinese dependency information. The Uyghur text is first morphologically analyzed; Chinese-Uyghur word alignment and Chinese dependency parsing are then performed, and Uyghur dependency structures are derived from the alignments and the Chinese dependencies. After the results are refined, a Uyghur dependency treebank is obtained. A dependency parser trained on this treebank achieves a labeled attachment score (LAS) of 34.38% and an unlabeled attachment score (UAS) of 52.53% on the CoNLL 2017 Shared Task test set.
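The projection step, deriving target-side dependencies from source-side heads through a word alignment, might be sketched as follows; a one-to-one alignment is assumed for simplicity, and the paper's refinement step is omitted:

```python
def project_dependencies(src_heads, align):
    """src_heads[i] is the head index of source word i (-1 for the root);
    align maps source index -> target index (one-to-one here).
    Returns target-side heads; relations whose head is unaligned stay None."""
    n_tgt = max(align.values()) + 1
    tgt_heads = [None] * n_tgt
    for s, t in align.items():
        h = src_heads[s]
        if h == -1:
            tgt_heads[t] = -1            # the root projects to the root
        elif h in align:
            tgt_heads[t] = align[h]      # arc follows the alignment
    return tgt_heads

src_heads = [1, -1, 1]        # source word 1 is the root
align = {0: 2, 1: 0, 2: 1}    # source i is aligned to target align[i]
tgt = project_dependencies(src_heads, align)
```

Many-to-one and unaligned words are the usual source of noise here, which is why a refinement pass over the projected treebank is needed.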

7.
To improve the reordering ability of phrase-based machine translation systems, this paper proposes a reordering model based on source-side head-modifier dependency structures and integrates it into the translation system as a soft constraint. The model offers a way to exploit syntactic tree resources in machine translation by mapping each syntactic tree into a set of head-modifier dependency relations. Under the default settings of the phrase-based system, the model significantly improves translation quality; built on top of the system's existing lexicalized reordering model, it injects syntactic information into the translation model. Experimental results show that the model clearly improves the system's local reordering.

8.
Targeting modern Tibetan syntax, and with reference to the Penn Chinese Treebank, this work builds a Tibetan phrase-structure treebank together with a treebank editing tool, in support of Tibetan-Chinese machine translation. On top of the treebank, a Tibetan-Chinese machine translation method incorporating Tibetan syntactic features is proposed. Experimental analysis shows that the method can be applied effectively in a Tibetan-Chinese machine translation system.

9.
Hierarchical parsing is a simple and fast approach to full parsing that decomposes the task into three stages: POS tagging, chunking, and tree construction. This paper further splits chunking into base-chunk and complex-chunk analysis, and replaces the maximum-entropy model with conditional random fields for sequence labeling. Because error accumulation is especially severe in hierarchical parsing, the paper proposes a simple and practical error-prediction and collaborative-correction algorithm: tagging decisions predicted to be erroneous at one layer are tracked into the next layer, and the two layers' prediction scores are combined to correct them. Experimental results show that with this correction method, hierarchical parsing preserves its speed while reaching accuracy comparable to mainstream Chinese parsers.
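One plausible reading of the score-combination step is an interpolation of the two layers' label scores for a flagged decision; the weight `alpha`, the label set, and the scores below are illustrative assumptions, not the paper's actual formula:

```python
def rescore(scores_lower, scores_upper, alpha=0.5):
    """Re-decide a suspected-erroneous tag by interpolating the lower
    layer's scores with the next layer's scores."""
    labels = set(scores_lower) | set(scores_upper)
    combined = {y: alpha * scores_lower.get(y, 0.0)
                   + (1 - alpha) * scores_upper.get(y, 0.0)
                for y in labels}
    return max(combined, key=combined.get)

lower = {"NP": 0.45, "VP": 0.55}   # lower layer narrowly prefers VP
upper = {"NP": 0.90, "VP": 0.10}   # upper layer strongly prefers NP
best = rescore(lower, upper)
```

Only decisions the lower layer already flags as uncertain would be re-scored this way, so confident decisions keep their speed advantage.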

10.
This paper implements a machine-learning-based system for detecting Chinese zero (elided) elements. The corpus is preprocessed, multiple features and feature combinations are selected, and a detection model built with support vector machines (SVMs) performs the detection. The system's performance is studied on different parse trees: experimental results show an F-score of 84.01% on gold-standard parse trees and 68.22% on automatically produced parse trees.

11.
Analyzing the syntactic structure of natural languages by parsing is an important task in artificial intelligence. Due to the complexity of natural languages, individual parsers tend to make different yet complementary errors. We propose a neural network based approach to combine parses from different parsers to yield a more accurate parse than individual ones. Unlike conventional approaches, our method directly transforms linearized candidate parses into the ground-truth parse. Experiments on the Penn English Treebank show that the proposed method improves over a state-of-the-art parser combination approach significantly.

12.
How to design a connectionist holistic parser
Ho EK  Chan LW 《Neural computation》1999,11(8):1995-2016
Connectionist holistic parsing offers a viable and attractive alternative to traditional algorithmic parsers. With exposure to a limited subset of grammatical sentences and their corresponding parse trees only, a holistic parser is capable of learning inductively the grammatical regularity underlying the training examples that affects the parsing process. In the past, various connectionist parsers have been proposed. Each approach had its own unique characteristics, and yet some techniques were shared in common. In this article, various dimensions underlying the design of a holistic parser are explored, including the methods to encode sentences and parse trees, whether a sentence and its corresponding parse tree share the same representation, the use of confluent inference, and the inclusion of phrases in the training set. Different combinations of these design factors give rise to different holistic parsers. In succeeding discussions, we scrutinize these design techniques and compare the performances of a few parsers on language parsing, including the confluent preorder parser, the backpropagation parsing network, the XERIC parser of Berg (1992), the modular connectionist parser of Sharkey and Sharkey (1992), Reilly's (1992) model, and their derivatives. Experiments are performed to evaluate their generalization capability and robustness. The results reveal a number of issues essential for building an effective holistic parser.

13.
Subsymbolic systems have been successfully used to model several aspects of human language processing. Such parsers are appealing because they allow revising the interpretation as words are incrementally processed. Yet, it has been very hard to scale them up to realistic language due to training time, limited memory, and the difficulty of representing linguistic structure. In this study, we show that it is possible to keep track of long-distance dependencies and to parse into deeper structures than before based on two techniques: a localist encoding of the input sequence and a dynamic unrolling of the network according to the parse tree. With these techniques, the system can nonmonotonically parse a corpus of realistic sentences into parse trees labelled with grammatical tags from a broad-coverage Head-driven Phrase Structure Grammar of English.

14.
This paper describes a two-level error repair and recovery scheme applicable to table-driven parsers and scanners. For parsers, the first level of the scheme tries to locally correct erroneous text by performing insertions, deletions and replacements of tokens around the error detection point, and matching these source modifications against an ordered list of correction models. If this local repair of text fails, a global recovery is initiated, which skips the text up to a “key terminal” and pops the parse stack. Lexical error processing is based on similar principles. The main advantages of the scheme are its power, efficiency and language independence. It can be parameterized by the grammar writer, uses the normal analysis tables and does not slow down the analysis of correct portions of text. Furthermore it can be easily implemented in automatically generated analysers such as the ones constructed by our system SYNTAX.
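A minimal sketch of the first-level local repair: single-token insertions, replacements and deletions are tried in a fixed order against a stand-in parse check; the toy grammar, the ordering, and `try_parse` itself are assumptions for illustration, not the SYNTAX system's actual correction models:

```python
def local_repair(tokens, err, vocab, try_parse):
    """Try single-token edits around the error point `err`, in a fixed
    order, and return the first patched token list that parses."""
    candidates = []
    for t in vocab:
        candidates.append(tokens[:err] + [t] + tokens[err:])        # insertion
        candidates.append(tokens[:err] + [t] + tokens[err + 1:])    # replacement
    candidates.append(tokens[:err] + tokens[err + 1:])              # deletion
    for cand in candidates:
        if try_parse(cand):
            return cand
    return None   # give up locally; global recovery (skip to a key terminal) follows

def try_parse(toks):
    # Toy stand-in for re-running the parser: accepts ID (';' ID)* lists.
    ok = all(t == (";" if i % 2 else "id") for i, t in enumerate(toks))
    return ok and len(toks) % 2 == 1

# "id id" is missing a separator; the repair inserts ";" at the error point.
repaired = local_repair(["id", "id"], 1, ["id", ";"], try_parse)
```

In the real scheme the ordered correction models are supplied by the grammar writer, which is what makes the repair language independent.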

15.
Ho EK  Chan LW 《Neural computation》2001,13(5):1137-1170
Holistic parsers offer a viable alternative to traditional algorithmic parsers. They have good generalization performance and are robust inherently. In a holistic parser, parsing is achieved by mapping the connectionist representation of the input sentence to the connectionist representation of the target parse tree directly. Little prior knowledge of the underlying parsing mechanism thus needs to be assumed. However, it also makes holistic parsing difficult to understand. In this article, an analysis is presented for studying the operations of the confluent preorder parser (CPP). In the analysis, the CPP is viewed as a dynamical system, and holistic parsing is perceived as a sequence of state transitions through its state-space. The seemingly one-shot parsing mechanism can thus be elucidated as a step-by-step inference process, with the intermediate parsing decisions being reflected by the states visited during parsing. The study serves two purposes. First, it improves our understanding of how grammatical errors are corrected by the CPP. The occurrence of an error in a sentence will cause the CPP to deviate from the normal track that is followed when the original sentence is parsed. But as the remaining terminals are read, the two trajectories will gradually converge until finally the correct parse tree is produced. Second, it reveals that having systematic parse tree representations alone cannot guarantee good generalization performance in holistic parsing. More important, they need to be distributed in certain useful locations of the representational space. Sentences with similar trailing terminals should have their corresponding parse tree representations mapped to nearby locations in the representational space. The study provides concrete evidence that encoding the linearized parse trees as obtained via preorder traversal can satisfy such a requirement.

16.
XML is acknowledged as the most effective format for data encoding and exchange over domains ranging from the World Wide Web to desktop applications. However, large-scale adoption into actual system implementations is being slowed down due to the inefficiency of its document-parsing methods. The recent development of lazy parsing techniques is a major step towards improving this situation, but lazy parsers still have a key drawback—they must load the entire XML document in order to extract the overall document structure before document parsing can be performed. We have developed a framework for efficient parsing based on the idea of placing internal physical pointers within the XML document that allow the navigation process to skip large portions of the document during parsing. We show how to generate such internal pointers in a way that optimizes parsing using constructs supported by the current W3C XML standard. A double-lazy parser (2LP) exploits these internal pointers to efficiently parse the document. The usage of supported W3C constructs to create internal pointers allows 2LP to be backward compatible—i.e., the pointer-augmented documents can be parsed by current XML parsers. We also implemented a mechanism to efficiently parse large documents with limited main memory, thereby overcoming a major limitation in current solutions. We study our pointer generation and parsing algorithms both theoretically and experimentally, and show that they perform considerably better than existing approaches.
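The skip-pointer idea can be illustrated with a naive external offset table that lets the parser jump straight to one subtree and parse only that slice; the actual 2LP scheme instead embeds W3C-conformant pointers inside the document itself, and the offset computation below is deliberately simplistic:

```python
import xml.etree.ElementTree as ET

doc = "<root><a><x>1</x></a><b><y>2</y></b><c/></root>"

def build_pointers(doc, tags):
    """Record (start, end) character offsets of each top-level child.
    Naive string search; assumes tag names are unique in the document."""
    ptrs = {}
    for tag in tags:
        start = doc.index("<" + tag)
        close = "</" + tag + ">"
        if close in doc:
            end = doc.index(close) + len(close)
        else:                                   # self-closing element
            end = doc.index("/>", start) + 2
        ptrs[tag] = (start, end)
    return ptrs

def lazy_parse(doc, ptrs, tag):
    """Parse only the slice holding `tag`, skipping the rest of the document."""
    start, end = ptrs[tag]
    return ET.fromstring(doc[start:end])

ptrs = build_pointers(doc, ["a", "b", "c"])
subtree = lazy_parse(doc, ptrs, "b")
```

With offsets stored up front, the cost of reaching an element no longer depends on everything that precedes it in the file.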

17.
This article describes a method that successfully exploits syntactic features for n-best translation candidate reranking using perceptrons. We motivate the utility of syntax by demonstrating the superior performance of parsers over n-gram language models in differentiating between Statistical Machine Translation output and human translations. Our approach uses discriminative language modelling to rerank the n-best translations generated by a statistical machine translation system. The performance is evaluated for Arabic-to-English translation using NIST’s MT-Eval benchmarks. While deep features extracted from parse trees do not consistently help, we show how features extracted from a shallow Part-of-Speech annotation layer outperform a competitive baseline and a state-of-the-art comparative reranking approach, leading to significant BLEU improvements on three different test sets.
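The reranking step can be sketched as a structured perceptron over shallow POS-bigram features: each n-best candidate is scored by a weighted feature sum, and training moves weight mass from the model's pick toward the oracle-best candidate. The feature template and data are illustrative; the paper's feature set is considerably richer:

```python
def features(pos_tags):
    """POS-bigram counts for one candidate's tag sequence."""
    feats = {}
    for a, b in zip(pos_tags, pos_tags[1:]):
        feats[(a, b)] = feats.get((a, b), 0) + 1
    return feats

def score(w, feats):
    return sum(w.get(k, 0.0) * v for k, v in feats.items())

def perceptron_update(w, nbest, gold_idx):
    """One perceptron step: if the top-scored candidate is not the
    oracle-best one, promote the gold features and demote the predicted ones."""
    pred = max(range(len(nbest)), key=lambda i: score(w, features(nbest[i])))
    if pred != gold_idx:
        for k, v in features(nbest[gold_idx]).items():
            w[k] = w.get(k, 0.0) + v
        for k, v in features(nbest[pred]).items():
            w[k] = w.get(k, 0.0) - v
    return w

w = {}
nbest = [["DT", "NN", "VB"], ["DT", "VB", "NN"]]   # two candidate tag sequences
for _ in range(3):
    w = perceptron_update(w, nbest, gold_idx=1)
best = max(range(len(nbest)), key=lambda i: score(w, features(nbest[i])))
```

Because the features come from a POS layer rather than full parse trees, the reranker stays cheap while still capturing shallow syntax.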

18.
E. Klein  M. Martin 《Software》1989,19(11):1015-1028
PGS is a parser generating system accepting LALR(1) and related grammars in extended BNF notation and producing parsers based on table-driven stack automata. To enable syntax-directed translation, semantic actions can be attached to rules of the input grammar. An attribution mechanism allows the transfer of information between rules. The generated parsers have an automatic error recovery which can be tailored to satisfy specific needs of the language to be accepted. PGS generates parsers written in Pascal, Modula-2, C or Ada. Compared with existing systems, e.g. YACC, a parser generated by PGS is twice as fast and the parse tables require 25 per cent less storage. This paper gives a survey of algorithms involved in the generator and the generated parsers, and compares them with algorithms used in other systems. In detail, it compares several parse-table representations and their implications for space and time efficiency of the generated parsers.

19.
Thomas R. Kramer 《Software》2010,40(5):387-404
A method is described for repairing some shift/reduce conflicts caused by limited lookahead in LALR(1) parsers such as those built by bison. Also, six types of Extended BNF (EBNF) construct are identified that cause a shift/reduce conflict when a Yet Another Compiler Compiler (YACC) file translated directly from EBNF is processed by bison. For each type, a replacement EBNF construct is described representing the same grammar and causing no shift/reduce conflict when its YACC equivalent is processed by bison. Algorithms are given for identifying instances of each type and transforming them into their replacements. The algorithms are implemented in an automatic parser builder that builds parsers for subsets of the DMIS language. The parser builder reads an EBNF file and writes C++ classes, a YACC file, and a Lex file, which are then processed to build a parser. The parsers build a parse tree using the C++ classes. The EBNF for DMIS is written in natural terms so that natural C++ classes are generated. However, if translated directly into YACC, the natural EBNF leads to 22 shift/reduce conflicts that fall into the six types. The parser builder recognizes the six constructs and replaces them automatically before generating YACC. The YACC that is generated parses in terms of unnatural constructs while building a parse tree using natural C++ classes. The six types of construct may occur in any statement‐based language that uses a minor separator, such as a comma; hence knowing how to recognize and replace them may be broadly useful. Published in 2010 by John Wiley & Sons, Ltd.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号