H.3. Artificial Intelligence
Mahdi Rasouli; Vahid Kiani
Abstract
The identification of emotions in short texts of low-resource languages poses a significant challenge, requiring specialized frameworks and computational intelligence techniques. This paper presents a comprehensive exploration of shallow and deep learning methods for emotion detection in short Persian texts. Shallow learning methods employ feature extraction and dimension reduction to enhance classification accuracy. On the other hand, deep learning methods utilize transfer learning and word embedding, particularly BERT, to achieve high classification accuracy. A Persian dataset called "ShortPersianEmo" is introduced to evaluate the proposed methods, comprising 5472 diverse short Persian texts labeled in five main emotion classes. The evaluation results demonstrate that transfer learning and BERT-based text embedding classify short Persian texts more accurately than the alternative approaches. The dataset of this study, ShortPersianEmo, will be publicly available online at https://github.com/vkiani/ShortPersianEmo.
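The shallow-learning side of such a pipeline can be illustrated with a minimal sketch: a bag-of-words nearest-centroid classifier over a hypothetical toy corpus. The tiny English training set and the two emotion labels below are illustrative stand-ins, not the ShortPersianEmo data or the paper's actual feature-extraction and dimension-reduction steps.

```python
from collections import Counter, defaultdict
import math

# Hypothetical tiny training set; real experiments use the 5472-text
# ShortPersianEmo corpus with five emotion classes.
TRAIN = [
    ("I am so happy today", "joy"),
    ("what a wonderful day", "joy"),
    ("I am very sad and lonely", "sadness"),
    ("this loss makes me cry", "sadness"),
]

def bow(text):
    """Bag-of-words feature vector as a token-count dictionary."""
    return Counter(text.lower().split())

def centroid(vectors):
    """Average several count vectors into one centroid per emotion class."""
    total = defaultdict(float)
    for v in vectors:
        for tok, c in v.items():
            total[tok] += c
    n = len(vectors)
    return {tok: c / n for tok, c in total.items()}

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Build one centroid per emotion class from the training texts.
by_class = defaultdict(list)
for text, label in TRAIN:
    by_class[label].append(bow(text))
CENTROIDS = {label: centroid(vs) for label, vs in by_class.items()}

def predict(text):
    """Assign the emotion class whose centroid is closest in cosine similarity."""
    v = bow(text)
    return max(CENTROIDS, key=lambda label: cosine(v, CENTROIDS[label]))
```

The deep-learning alternative described in the abstract would instead fine-tune a BERT-based encoder on the labeled texts, which this stdlib-only sketch deliberately avoids.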
H.3. Artificial Intelligence
Hassan Haji Mohammadi; Alireza Talebpour; Ahamd Mahmoudi Aznaveh; Samaneh Yazdani
Abstract
Coreference resolution is one of the essential tasks of natural language processing. This task identifies all in-text expressions that refer to the same entity in the real world. Coreference resolution is used in other fields of natural language processing, such as information extraction, machine translation, and question answering. This article presents a new coreference resolution corpus in Persian named the Mehr corpus. The article's primary goal is to develop a Persian coreference corpus that resolves some of the shortcomings of previous Persian corpora while maintaining a high inter-annotator agreement. This corpus annotates coreference relations for noun phrases, named entities, pronouns, and nested named entities. Two baseline pronoun resolution systems are developed, and their results are reported. The corpus includes 400 documents and about 170k tokens. Corpus annotation was done with the WebAnno annotation tool.
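The kind of annotation the abstract describes, chains of mentions (noun phrases, named entities, pronouns) that corefer, can be modeled with a small in-memory structure. The field names below are illustrative, not the Mehr corpus schema or the WebAnno export format.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Mention:
    start: int   # token index where the mention begins
    end: int     # token index one past the mention's last token
    kind: str    # "noun_phrase", "named_entity", "pronoun", ...

@dataclass
class Document:
    tokens: list
    # Each chain is a list of Mentions that refer to the same real-world entity.
    chains: list = field(default_factory=list)

    def antecedent_of(self, mention):
        """Return the closest earlier mention in the same chain, or None.
        This is the lookup a pronoun-resolution baseline would be scored on."""
        for chain in self.chains:
            if mention in chain:
                earlier = [m for m in chain if m.end <= mention.start]
                return max(earlier, key=lambda m: m.end, default=None)
        return None
```

For example, in "Ali said he would come", a gold chain links the named entity "Ali" with the pronoun "he", and `antecedent_of` on the pronoun recovers "Ali".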
Document and Text Processing
A.R. Mazochi; S. Bourbour; M. R. Ghofrani; S. Momtazi
Abstract
Converting a postal address to a coordinate, known as geocoding, is a helpful tool in many applications. Developing a geocoder is a difficult task if the tool targets a developing country that does not follow a standard addressing format. The lack of complete reference data and the non-persistency of names are the main challenges, besides the common natural language processing challenges. In this paper, we propose a geocoder for Persian addresses. To the best of our knowledge, our system, TehranGeocode, is the first geocoder for this language. Considering the non-standard structure of Persian addresses, we need to split the address into small segments, find each segment in the reference dataset, and connect them to find the target of the address. To this end, we develop our system based on address parsing and dynamic programming. We specify the contribution of our work compared to similar studies, discuss the main components of the system, its data, and its results, and show that the proposed framework achieves promising results in the field by finding 83% of addresses with less than 300 meters of error.
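The split-then-match step can be sketched as follows, using classic dynamic-programming edit distance as the segment matcher. The two-entry reference table and the comma-based splitting are hypothetical simplifications; TehranGeocode's actual parsing, reference data, and segment-linking logic are far richer.

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance (single-row variant)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            # dp[j] still holds the previous row's value at this point.
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ca != cb))  # substitution/match
    return dp[-1]

# Hypothetical reference data: segment name -> (lat, lon). Real reference
# data would be much larger and hierarchical (city, district, street, ...).
REFERENCE = {
    "valiasr street": (35.758, 51.408),
    "enghelab square": (35.701, 51.391),
}

def geocode(address):
    """Split the address into segments, match each segment against the
    reference names by edit distance, and return the best-matching coordinate."""
    best = None
    for segment in (s.strip().lower() for s in address.split(",")):
        for name, coord in REFERENCE.items():
            d = edit_distance(segment, name)
            if best is None or d < best[0]:
                best = (d, coord)
    return best[1] if best else None
```

Fuzzy matching of this kind is one plausible way to cope with the non-persistency of names that the abstract highlights, since a slightly misspelled or renamed segment still lands near its reference entry.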
M. Asgari-Bidhendi; B. Janfada; O. R. Roshani Talab; B. Minaei-Bidgoli
Abstract
Named Entity Recognition (NER) is one of the essential prerequisites for many natural language processing tasks. All public corpora for Persian named entity recognition, such as ParsNERCorp and ArmanPersoNERCorpus, are based on the Bijankhan corpus, which originated from the Hamshahri newspaper in 2004. Correspondingly, most of the published named entity recognition models for Persian are specifically tuned to news data and are not flexible enough to be applied to different text categories, such as social media texts. This study introduces ParsNER-Social, a corpus for training named entity recognition models in the Persian language built from social media sources. This corpus consists of 205,373 tokens and their NER tags, crawled from social media content, including 10 Telegram channels in 10 different categories. Furthermore, three supervised methods are introduced and trained on the ParsNER-Social corpus: two conditional random field models as baselines and one state-of-the-art deep learning model with six different configurations are evaluated on the proposed dataset. The experiments show that the mono-lingual Persian models based on Bidirectional Encoder Representations from Transformers (MLBERT) outperform the other approaches on the ParsNER-Social corpus. Among the different configurations of MLBERT models, the ParsBERT+BERT-TokenClass model obtained an F1-score of 89.65%.
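The F1-score used to compare such models is conventionally computed at the entity level over BIO tag sequences. A minimal sketch of that metric, assuming strict BIO tagging (an exact span-and-type match counts as a true positive), looks like this; the abstract does not state the paper's exact scoring script, so this is an illustration of the standard scheme, not a reproduction of it.

```python
def spans(tags):
    """Extract (start, end, type) entity spans from a strict BIO tag sequence."""
    out, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel "O" closes a trailing span
        inside = tag.startswith("I-") and start is not None and tag[2:] == etype
        if not inside and start is not None:
            out.append((start, i, etype))
            start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    return out

def entity_f1(gold, pred):
    """Entity-level F1: precision/recall over exactly matching spans."""
    gs, ps = set(spans(gold)), set(spans(pred))
    tp = len(gs & ps)
    prec = tp / len(ps) if ps else 0.0
    rec = tp / len(gs) if gs else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

For instance, a prediction that recovers the person span but misses the location scores precision 1.0 and recall 0.5, hence F1 of 2/3.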
H.3.8. Natural Language Processing
L. Jafar Tafreshi; F. Soltanzadeh
Abstract
Named Entity Recognition is an information extraction technique that identifies named entities in a text. Three methods have conventionally been used to extract named entities from a text: rule-based, machine-learning-based, and hybrids of the two. Machine-learning-based methods perform well for the Persian language if they are trained with good features. To obtain good performance in conditional random field-based Persian named entity recognition, several syntactic features based on dependency grammar, along with some morphological and language-independent features, have been designed in order to extract suitable features for the learning phase. In this implementation, the designed features have been applied to a conditional random field to build our model. To evaluate our system, the Persian syntactic dependency treebank, with about 30,000 sentences, prepared at the NOOR Islamic science computer research center, has been used. This treebank has named-entity tags, such as Person, Organization, and Location. The results of this study show that our approach achieved 86.86% precision, 80.29% recall, and 83.44% F-measure, which are relatively higher than the values reported for other Persian NER methods.
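CRF toolkits typically consume one feature dictionary per token, so dependency-grammar and morphological cues of the kind described above would be encoded roughly as follows. The token fields (`form`, `pos`, `deprel`, `head`) and the specific features are hypothetical stand-ins for the paper's feature set, shown in the dictionary style used by common CRF libraries such as sklearn-crfsuite.

```python
def token_features(sent, i):
    """Feature dictionary for token i of a dependency-parsed sentence.
    `sent` is a list of dicts with hypothetical keys: form, pos, deprel, head
    (head is the index of the syntactic head, or -1 for the root)."""
    tok = sent[i]
    feats = {
        "word": tok["form"],
        "pos": tok["pos"],
        "deprel": tok["deprel"],     # dependency relation to the head (syntactic feature)
        "head_pos": sent[tok["head"]]["pos"] if tok["head"] >= 0 else "ROOT",
        "suffix3": tok["form"][-3:],  # simple morphological cue
        "is_first": i == 0,           # language-independent positional feature
    }
    if i > 0:
        feats["prev_pos"] = sent[i - 1]["pos"]  # local context feature
    return feats
```

Each sentence then yields a parallel list of such dictionaries and gold NER tags, which is the training input a CRF learner expects.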
H.3.15.3. Evolutionary computing and genetic algorithms
H.R Keshavarz; M. Saniee Abadeh
Abstract
In Web 2.0, people are free to share their experiences, views, and opinions. One of the problems that arises in Web 2.0 is the sentiment analysis of texts produced by users in outlets such as Twitter. One of the main tasks of sentiment analysis is subjectivity classification. Our aim is to classify the subjectivity of tweets. To this end, we create subjectivity lexicons in which words are divided into objective and subjective words. To create these lexicons, we make use of three metaheuristic methods. We extract two meta-level features, which give the counts of objective and subjective words in a tweet according to the lexicons, and then classify the tweets based on these two features. Our method outperforms the baselines in terms of accuracy and F-measure. Among the three metaheuristics, the genetic algorithm performs better than simulated annealing and asexual reproduction optimization, and it also outperforms all the baselines in terms of accuracy on two of the three assessed datasets. The created lexicons also give insight into the objectivity and subjectivity of words.
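The two meta-level features are simple to compute once the lexicons exist. The miniature lexicons below are hypothetical placeholders (the paper builds its lexicons with the three metaheuristics), and the final decision rule here is a trivial count comparison rather than the trained classifier the paper uses.

```python
# Hypothetical miniature lexicons; the paper derives these with a genetic
# algorithm, simulated annealing, and asexual reproduction optimization.
SUBJECTIVE = {"love", "hate", "awesome", "terrible"}
OBJECTIVE = {"today", "price", "report", "city"}

def meta_features(tweet):
    """The two meta-level features: counts of subjective and objective
    lexicon words appearing in the tweet."""
    toks = tweet.lower().split()
    return (sum(t in SUBJECTIVE for t in toks),
            sum(t in OBJECTIVE for t in toks))

def is_subjective(tweet):
    """A trivial decision rule on the two meta-features, standing in for
    the classifier trained on them."""
    subj, obj = meta_features(tweet)
    return subj > obj
```

Reducing each tweet to this two-dimensional feature vector is what makes the downstream classification step so cheap: the lexicon quality, not the classifier, carries the burden.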
H.8. Document and Text Processing
Sh. Rafieian; A. Baraani dastjerdi
Abstract
With due respect to authors' rights, plagiarism detection is one of the critical problems in the field of text mining that many researchers are interested in. This issue is considered a serious one in higher academic institutions. Language-independent tools exist, but they do not yield reliable results, since the special features of each language are ignored. Considering the paucity of work on the Persian language, due to the lack of reliable plagiarism checkers in Persian, there is a need for a method to improve the accuracy of detecting plagiarized Persian phrases. This article presents the PCP solution. This solution is a combinational method that, in addition to the meaning and stem of words, deals with synonyms and pluralization by applying a document-tree representation based on fingerprinting the text with 3-gram words. The obtained grams are extracted from the text, hashed with the BKDR hash function, and stored as the fingerprint of a document in the fingerprint repository of reference documents, for checking suspicious documents. The proposed PCP method is evaluated through eight experiments on seven different sets, comprising suspicious documents and reference documents, from the Hamshahri newspaper website. The results indicate that the accuracy of the PCP method in detecting similar texts shows an average improvement of 21.15 percent over the localized "Winnowing" method, and an average improvement of 31.65 percent over the language-independent tool.
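The fingerprinting core, hashing word 3-grams with the BKDR function and comparing fingerprint sets, can be sketched as follows. The seed 131 is the value commonly used with BKDR, and the containment-style similarity score is an illustrative choice; PCP's stemming, synonym handling, and document-tree representation are not reproduced here.

```python
def bkdr_hash(s, seed=131):
    """BKDR string hash: h = h * seed + ord(ch), folded to 32 bits."""
    h = 0
    for ch in s:
        h = (h * seed + ord(ch)) & 0xFFFFFFFF
    return h

def fingerprint(text):
    """Hash every word 3-gram of the text into its fingerprint set."""
    words = text.lower().split()
    return {bkdr_hash(" ".join(words[i:i + 3]))
            for i in range(len(words) - 2)}

def similarity(suspicious, reference):
    """Share of the suspicious document's 3-gram fingerprints that also
    occur in the reference document's fingerprint repository entry."""
    fs, fr = fingerprint(suspicious), fingerprint(reference)
    return len(fs & fr) / len(fs) if fs else 0.0
```

In a full system the reference fingerprints are computed once and stored, so checking a suspicious document reduces to hashing its 3-grams and probing the repository.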