Affiliation: | 1. College of Computer Science, National University of Defense Technology, Changsha, China; 2. National Laboratory for Parallel and Distributed Processing, National University of Defense Technology, Changsha, China |
Abstract: | Text generation from abstract meaning representation (AMR) is a fundamental task in natural language generation. An interesting challenge is that distant context can influence the surface realization of each node. In previous encoder-decoder approaches, graph neural networks have commonly been used to encode AMR graphs and have outperformed sequence and tree encoders. However, most of them cannot stack many layers and are therefore too shallow to capture distant context. In this paper, we propose solutions from three aspects. First, we introduce a Transformer-based graph encoder to embed AMR graphs; this encoder can stack more layers to encode larger context without performance degradation. Second, we expand the receptive field of each node, i.e., we build direct connections between node pairs, so that each node can capture information from its distant neighbors; we also exploit relative position embeddings to keep the model aware of the original hierarchy of the graph. Third, we encode the linearized AMR with a pre-trained language model to obtain a sequence encoding and incorporate it into the graph encoding to enrich the features. We conduct experiments on LDC2015E86 and LDC2017T10, and the results demonstrate that our method outperforms previous strong baselines. In particular, we investigate the performance of our model on large graphs and find an even larger performance gain. Our best model achieves 31.99 BLEU and 37.02 METEOR on LDC2015E86, and 34.21 BLEU and 39.26 METEOR on LDC2017T10, setting a new state of the art. |
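To make the "expanded receptive field with relative position embeddings" idea concrete, the sketch below shows one possible self-attention layer over AMR node embeddings in which every node attends to every other node, and a learned bias indexed by the (clipped) shortest-path distance between two nodes is added to the attention logits so the layer remains aware of the graph's original hierarchy. This is only a minimal illustration under our own assumptions, not the authors' exact architecture; the class name `GraphSelfAttention` and the parameter `max_distance` are hypothetical.

```python
# Minimal sketch: graph self-attention with relative-distance bias (assumed design,
# not the paper's exact model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphSelfAttention(nn.Module):
    def __init__(self, d_model: int, num_heads: int, max_distance: int = 8):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.d_head = d_model // num_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # One scalar bias per (distance bucket, head); distances are clipped to
        # max_distance, and unreachable pairs (encoded as -1) fall into bucket 0.
        self.dist_bias = nn.Embedding(max_distance + 1, num_heads)
        self.max_distance = max_distance

    def forward(self, node_states: torch.Tensor, dist: torch.Tensor) -> torch.Tensor:
        # node_states: (batch, num_nodes, d_model)
        # dist: (batch, num_nodes, num_nodes) shortest-path distances (long), -1 if unreachable
        b, n, _ = node_states.shape
        q, k, v = self.qkv(node_states).chunk(3, dim=-1)
        q = q.view(b, n, self.num_heads, self.d_head).transpose(1, 2)
        k = k.view(b, n, self.num_heads, self.d_head).transpose(1, 2)
        v = v.view(b, n, self.num_heads, self.d_head).transpose(1, 2)
        # Every node attends to every other node: full (n x n) attention scores.
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5       # (b, h, n, n)
        buckets = dist.clamp(min=0, max=self.max_distance)           # (b, n, n)
        bias = self.dist_bias(buckets).permute(0, 3, 1, 2)           # (b, h, n, n)
        attn = F.softmax(scores + bias, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, -1)
        return self.out(out)

# Toy usage: 5 AMR nodes with a random shortest-path distance matrix.
layer = GraphSelfAttention(d_model=64, num_heads=4)
nodes = torch.randn(1, 5, 64)
dist = torch.randint(0, 4, (1, 5, 5))
print(layer(nodes, dist).shape)  # torch.Size([1, 5, 64])
```

In a full model, several such layers would be stacked, and the sequence encoding produced by a pre-trained language model over the linearized AMR could be concatenated or added to the node states before decoding; that fusion step is omitted here for brevity.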