TY - JOUR
ID - 2455
TI - A Transformer-based Approach for Persian Text Chunking
JO - Journal of AI and Data Mining
JA - JADM
LA - en
SN - 2322-5211
AU - Kavehzadeh, P.
AU - Abdollah Pour, M. M.
AU - Momtazi, S.
AD - Computer Engineering Department, Amirkabir University of Technology, Tehran, Iran.
Y1 - 2022
PY - 2022
VL - 10
IS - 3
SP - 373
EP - 383
KW - Persian text chunking
KW - sequence labeling
KW - deep learning
KW - contextualized word representation
DO - 10.22044/jadm.2022.11035.2250
N2 - Over the last few years, text chunking has played a significant role in sequence labeling tasks. Although a large variety of methods have been proposed for shallow parsing in English, most approaches proposed for text chunking in the Persian language are based on simple and traditional concepts. In this paper, we propose using state-of-the-art transformer-based contextualized models, namely BERT and XLM-RoBERTa, as the major structure of our models. A Conditional Random Field (CRF), the combination of Bidirectional Long Short-Term Memory (BiLSTM) and CRF, and a simple dense layer are employed after the transformer-based models to enhance the models' performance in predicting chunk labels. Moreover, we provide a new dataset for noun phrase chunking in Persian, which includes annotated Persian news text. Our experiments reveal that XLM-RoBERTa achieves the best performance among all the architectures tried on the proposed dataset. The results also show that a single CRF layer yields better results than a dense layer and even than the combination of BiLSTM and CRF.
UR - https://jad.shahroodut.ac.ir/article_2455.html
L1 - https://jad.shahroodut.ac.ir/article_2455_bb18bb2f7d37ea3cb28427dd014d7074.pdf
ER -