Arabic Named Entity Recognition: A BERT-BGRU Approach
Authors: Norah Alsaaran, Maha Alrabiah
Abstract: Named Entity Recognition (NER) is one of the fundamental tasks in Natural Language Processing (NLP); it aims to locate, extract, and classify named entities into predefined categories such as person, organization, and location. Most earlier research on identifying named entities relied on handcrafted features and very large knowledge resources, which is time-consuming and not adequate for resource-scarce languages such as Arabic. Recently, deep learning has achieved state-of-the-art performance on many NLP tasks, including NER, without requiring handcrafted features. In addition, transfer learning has proven its efficiency on several NLP tasks by exploiting pre-trained language models that transfer knowledge learned from large-scale datasets to domain-specific tasks. Bidirectional Encoder Representations from Transformers (BERT) is a contextual language model that generates semantic vectors dynamically according to the context of the words. The BERT architecture relies on multi-head attention, which allows it to capture global dependencies between words. In this paper, we propose a deep learning-based model that fine-tunes BERT to recognize and classify Arabic named entities. The pre-trained BERT contextual embeddings are used as input features to a Bidirectional Gated Recurrent Unit (BGRU), and the model is fine-tuned on two annotated Arabic Named Entity Recognition (ANER) datasets. Experimental results demonstrate that the proposed model outperforms state-of-the-art ANER models, achieving F-measure values of 92.28% and 90.68% on the ANERCorp dataset and the merged ANERCorp and AQMAR dataset, respectively.
Keywords: Named entity recognition; Arabic; deep learning; BGRU; BERT
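
The abstract describes feeding pre-trained BERT contextual embeddings into a Bidirectional GRU followed by a token classifier. The following is a minimal sketch of such a BERT-BGRU architecture, assuming PyTorch and Hugging Face transformers; the model name, tag count, and hidden size are illustrative placeholders, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertBGRU(nn.Module):
    """BERT encoder -> Bidirectional GRU -> per-token tag scores (sketch)."""
    def __init__(self, bert_name="aubmindlab/bert-base-arabertv02",
                 num_tags=9, gru_hidden=256):
        super().__init__()
        # Pre-trained BERT produces contextual embeddings for each sub-word token.
        self.bert = AutoModel.from_pretrained(bert_name)
        # Bidirectional GRU layered on top of the BERT outputs.
        self.bgru = nn.GRU(self.bert.config.hidden_size, gru_hidden,
                           batch_first=True, bidirectional=True)
        # Linear layer maps each token representation to entity-tag scores.
        self.classifier = nn.Linear(2 * gru_hidden, num_tags)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        gru_out, _ = self.bgru(hidden)
        return self.classifier(gru_out)  # shape: (batch, seq_len, num_tags)

# Usage: tokenize an Arabic sentence and obtain per-token tag scores.
tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv02")
model = BertBGRU()
batch = tokenizer(["ولد أحمد في الرياض"], return_tensors="pt")
with torch.no_grad():
    logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # torch.Size([1, seq_len, 9])
```

In a fine-tuning setup like the one the abstract describes, both the BERT weights and the BGRU/classifier layers would be updated on the annotated ANER data, typically with a cross-entropy loss over the tag sequence.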