Unsupervised statistical text simplification using pre-trained language modeling for initialization
Authors: Jipeng QIANG, Feng ZHANG, Yun LI, Yunhao YUAN, Yi ZHU, Xindong WU
Affiliation: 1. Department of Computer Science, Yangzhou University, Yangzhou 225127, China; 2. Key Laboratory of Knowledge Engineering with Big Data (Hefei University of Technology), Ministry of Education, Hefei 230009, China; 3. Mininglamp Academy of Sciences, Mininglamp, Beijing 100089, China
Abstract: Unsupervised text simplification has attracted much attention due to the scarcity of high-quality parallel text simplification corpora. Recently, an unsupervised statistical text simplification method based on a phrase-based machine translation system (UnsupPBMT) achieved good performance; it initializes its phrase tables with similar words obtained through word embedding modeling. However, because word embedding modeling captures only the relatedness between words, the phrase table in UnsupPBMT contains many dissimilar words. In this paper, we propose an unsupervised statistical text simplification method that uses the pre-trained language model BERT for initialization. Specifically, we use BERT as a general linguistic knowledge base to predict similar words. Experimental results show that our method outperforms state-of-the-art unsupervised text simplification methods on three benchmarks, and it even outperforms some supervised baselines.
Keywords: text simplification, pre-trained language modeling, BERT, word embeddings
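
To make the core idea concrete, the sketch below illustrates one simple way to use BERT's masked language modeling head to predict in-context substitutes for a target word, as the abstract describes. It is a minimal illustration, not the authors' implementation: it assumes the HuggingFace transformers library and the bert-base-uncased checkpoint, and the function name similar_words is hypothetical.

```python
# Minimal sketch (not the paper's code): query BERT's masked LM head for
# contextually similar words. Assumes `pip install torch transformers`.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def similar_words(sentence: str, target: str, top_k: int = 10):
    """Mask `target` in `sentence` and return BERT's top-k predictions."""
    masked = sentence.replace(target, tokenizer.mask_token, 1)
    inputs = tokenizer(masked, return_tensors="pt")
    # Locate the [MASK] position in the tokenized input.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(
        as_tuple=True
    )[0][0]
    with torch.no_grad():
        logits = model(**inputs).logits
    # Rank vocabulary tokens by their score at the masked position.
    top_ids = torch.topk(logits[0, mask_pos], top_k).indices
    return [tokenizer.convert_ids_to_tokens(i.item()) for i in top_ids]

print(similar_words("The cat perched on the mat.", "perched"))
```

Because the prediction is conditioned on the full sentence, the candidates tend to be words that genuinely fit the context, whereas word-embedding neighbors may be only loosely related (e.g., antonyms or co-occurring terms); this is the contrast the abstract draws with UnsupPBMT's embedding-based initialization.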