Document and Text Processing
Mina Tabatabaei; Hossein Rahmani; Motahareh Nasiri
Abstract
The search for effective treatments for complex diseases, while minimizing toxicity and side effects, has become crucial. However, identifying synergistic combinations of drugs is often a time-consuming and expensive process, relying on trial and error due to the vast search space involved. Addressing this issue, we present a deep learning framework in this study. Our framework utilizes a diverse set of features, including chemical structure, biomedical literature embeddings, and biological network interaction data, to predict potential synergistic drug combinations. Additionally, we employ autoencoders and principal component analysis (PCA) for dimensionality reduction of sparse data. Through 10-fold cross-validation, we achieved a 98% area under the curve (AUC), surpassing seven previous state-of-the-art approaches by an average of 8%.
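The paper's code is not reproduced here, but the following Python sketch illustrates the general shape of such a pipeline: PCA for dimensionality reduction over high-dimensional drug-pair features, followed by a small neural classifier evaluated with 10-fold cross-validated AUC. All feature dimensions, data, and model sizes are hypothetical stand-ins.

```python
# Illustrative sketch (not the authors' code): PCA-based dimensionality
# reduction on sparse drug-pair features, then 10-fold cross-validated AUC.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((500, 2048))   # stand-in for concatenated chemical,
                              # literature, and network features
y = rng.integers(0, 2, 500)   # stand-in synergy labels

model = make_pipeline(
    PCA(n_components=64),     # dense, low-dimensional representation
    MLPClassifier(hidden_layer_sizes=(128, 32), max_iter=300, random_state=0),
)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"mean AUC: {auc.mean():.3f}")
```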
Document and Text Processing
A.R. Mazochi; S. Bourbour; M. R. Ghofrani; S. Momtazi
Abstract
Converting a postal address to geographic coordinates, known as geocoding, is a helpful tool in many applications. Developing a geocoder is difficult when it targets a developing country that does not follow a standard addressing format. The lack of complete reference data and the non-persistence of place names are the main challenges, in addition to the usual natural language processing challenges. In this paper, we propose a geocoder for Persian addresses. To the best of our knowledge, our system, TehranGeocode, is the first geocoder for this language. Given the non-standard structure of Persian addresses, we need to split an address into small segments, find each segment in the reference dataset, and connect the segments to locate the target of the address. To this end, we develop our system based on address parsing and dynamic programming. We specify the contribution of our work compared to similar studies, discuss the main components of the system, its data, and its results, and show that the proposed framework achieves promising results, locating 83% of addresses with an error of less than 300 meters.
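As an illustration of the segment-matching idea (not TehranGeocode's actual algorithm or data), the sketch below splits an address into known segments, looks each up in a toy reference table, and uses dynamic programming to pick one candidate coordinate per segment so that consecutive picks are geographically consistent.

```python
# Hypothetical sketch of segment-based geocoding with dynamic programming;
# the reference data and distance-based cost are stand-ins.
from math import hypot

# Toy reference data: segment name -> candidate (x, y) coordinates.
REFERENCE = {
    "valiasr": [(10.0, 20.0), (40.0, 5.0)],
    "beheshti": [(10.5, 20.5)],
    "intersection": [(10.2, 20.1), (41.0, 6.0)],
}

def geocode(address):
    """Pick one candidate per segment so consecutive picks are close."""
    segments = [s for s in address.lower().split() if s in REFERENCE]
    if not segments:
        return None
    cands = [REFERENCE[s] for s in segments]
    # dp[i][j] = (best cumulative cost ending at candidate j of segment i, k)
    dp = [[(0.0, None)] * len(cands[0])]
    for i in range(1, len(cands)):
        row = []
        for x, y in cands[i]:
            best = min(
                (dp[i - 1][k][0] + hypot(x - px, y - py), k)
                for k, (px, py) in enumerate(cands[i - 1])
            )
            row.append(best)
        dp.append(row)
    # Return the coordinate of the cheapest final candidate.
    j = min(range(len(dp[-1])), key=lambda j: dp[-1][j][0])
    return cands[-1][j]

print(geocode("Valiasr Beheshti intersection"))  # -> (10.2, 20.1)
```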
Document and Text Processing
S. Momtazi; A. Rahbar; D. Salami; I. Khanijazani
Abstract
Text clustering and classification are two main tasks of text mining, and feature selection plays a key role in the quality of their results. Although word-based features such as term frequency-inverse document frequency (TF-IDF) vectors have been widely used in different applications, their shortcomings in capturing the semantic concepts of text have motivated researchers to use semantic models for document vector representation. Latent Dirichlet allocation (LDA) topic modeling and doc2vec neural document embedding are two well-known techniques for this purpose. In this paper, we first study the conceptual difference between the two models and show that they behave differently, capturing semantic features of texts from different perspectives. We then propose a hybrid approach to document vector representation that benefits from the advantages of both models. Experimental results on the 20 Newsgroups dataset show the superiority of the proposed model over each baseline on both text clustering and classification tasks: a 2.6% improvement in F-measure for text clustering and a 2.1% improvement in F-measure for text classification compared to the best baseline model.
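A minimal sketch of the hybrid representation, assuming gensim for both models: an LDA topic distribution and a doc2vec embedding are computed for each document and concatenated into a single vector. The corpus, dimensions, and hyperparameters are illustrative only.

```python
# Hybrid document vector: LDA topic distribution + doc2vec embedding.
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = [["graphics", "rendering", "gpu"],
        ["hockey", "league", "season"],
        ["gpu", "shader", "rendering"]]

dictionary = Dictionary(docs)
bows = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(bows, num_topics=2, id2word=dictionary, random_state=0)

d2v = Doc2Vec([TaggedDocument(d, [str(i)]) for i, d in enumerate(docs)],
              vector_size=8, min_count=1, epochs=40, seed=0)

def hybrid_vector(i):
    """Concatenate the topical (LDA) and semantic (doc2vec) views."""
    topics = np.zeros(lda.num_topics)
    for t, p in lda.get_document_topics(bows[i], minimum_probability=0.0):
        topics[t] = p
    return np.concatenate([topics, d2v.dv[str(i)]])

print(hybrid_vector(0).shape)  # (2 + 8,) = (10,)
```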
Document and Text Processing
A. Shojaie; F. Safi-Esfahani
Abstract
With the advent of the internet and easy access to digital libraries, plagiarism has become a major issue. One plagiarism detection technique applies search engines, converting plagiarism patterns into search queries. Generating suitable queries is the heart of this technique, and existing methods fall short in producing accurate queries and in the precision and speed of the retrieved results. This research proposes a framework called ParaMaker. It generates accurate paraphrases of any sentence, similar to human paraphrasing behavior, and sends them to a search engine to find plagiarism patterns. For the English language, ParaMaker was examined against six known methods on the standard PAN2014 datasets. Results showed an improvement of 34% in recall while precision and speed were maintained. For the Persian language, statements from suspicious documents were examined against an exact-search approach; ParaMaker showed an improvement of at least 42%, again with precision and speed maintained.
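ParaMaker's paraphrase generator is not public; the sketch below conveys the core idea with a tiny hand-made synonym table, expanding a sentence into query variants that would each be submitted to a search engine.

```python
# Toy paraphrase-to-query generation; the synonym table is a stand-in for
# a real paraphrasing model.
from itertools import product

SYNONYMS = {
    "rapid": ["fast", "quick"],
    "growth": ["expansion", "increase"],
}

def paraphrase_queries(sentence):
    """Yield paraphrased variants of a sentence for use as search queries."""
    options = [[w] + SYNONYMS.get(w, []) for w in sentence.lower().split()]
    return [" ".join(words) for words in product(*options)]

for q in paraphrase_queries("rapid growth of plagiarism"):
    print(q)  # each variant would be submitted to a search engine
```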
Document and Text Processing
A. Ahmadi Tameh; M. Nassiri; M. Mansoorizadeh
Abstract
WordNet is a large lexical database of the English language in which nouns, verbs, adjectives, and adverbs are grouped into sets of cognitive synonyms (synsets), each expressing a distinct concept. Synsets are interlinked by both semantic and lexical relations. WordNet is chiefly used for word sense disambiguation, information retrieval, and text translation. In this paper, we propose several automatic methods to extract Information and Communication Technology (ICT)-related data from Princeton WordNet; we then add the extracted data to our Persian WordNet. The advantage of automated methods is that they reduce the interference of human factors and accelerate the development of our bilingual ICT WordNet. In our first proposed method, based on a small subset of ICT words, we use the definition of each synset to decide whether that synset is ICT-related. The second mechanism extracts synsets that are in a semantic relation with ICT synsets. We also use two similarity criteria, namely LCS and S3M, to measure the similarity between a synset definition in WordNet and the definition of a word in the Microsoft dictionary. Our last method verifies the coordinate terms of ICT synsets. Results show that our proposed mechanisms are able to extract ICT data from Princeton WordNet with good accuracy.
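A rough sketch of the first two mechanisms, using NLTK's interface to Princeton WordNet; the ICT seed list is a hypothetical stand-in for the authors' subset of ICT words, and the similarity criteria (LCS, S3M) are omitted.

```python
# Requires: nltk.download("wordnet")
from nltk.corpus import wordnet as wn

ICT_SEEDS = {"computer", "network", "software", "internet", "protocol"}

def is_ict_by_definition(synset):
    """Mechanism 1: flag a synset whose gloss mentions an ICT seed word."""
    return any(seed in synset.definition().lower() for seed in ICT_SEEDS)

def related_ict_synsets(synset):
    """Mechanism 2: collect synsets in a semantic relation with an ICT one."""
    return synset.hypernyms() + synset.hyponyms() + synset.part_meronyms()

for s in wn.synsets("router"):
    if is_ict_by_definition(s):
        print(s.name(), "->", [r.name() for r in related_ict_synsets(s)])
```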
Document and Text Processing
N. Nazari; M. A. Mahdavi
Abstract
Text summarization endeavors to produce a summary version of a text while maintaining its original ideas. The textual content on the web, in particular, is growing at an exponential rate, and sifting through such a massive amount of data to extract useful information is a major undertaking that requires an automatic mechanism. Text summarization systems assist with content reduction by keeping the relevant information and filtering out the non-relevant parts of the text. In terms of input, there are two fundamental approaches: single-document summarization, where the system takes one document as input and produces a summary as output, and multi-document summarization, where the system takes several documents as input and produces a single summary document. In terms of output, summarization systems also fall into two major types: extractive systems, which select exact sentences from the original document to build the summary, and abstractive systems, a more complex approach in which the rendered text is a rephrased version of the original. This paper offers an in-depth introduction to automatic text summarization, along with evaluation techniques for assessing the quality of automatic summaries.
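To make the extractive category concrete, here is a minimal frequency-based extractive summarizer: sentences are scored by the frequency of their words and the top-ranked ones are kept in their original order. This is a generic illustration, not a method from the paper.

```python
# Toy extractive summarization by word-frequency sentence scoring.
from collections import Counter
import re

def extractive_summary(text, k=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(w.lower() for w in re.findall(r"\w+", text))
    scored = sorted(sentences,
                    key=lambda s: sum(freq[w.lower()]
                                      for w in re.findall(r"\w+", s)),
                    reverse=True)
    keep = set(scored[:k])  # the k highest-scoring sentences
    return " ".join(s for s in sentences if s in keep)

print(extractive_summary(
    "Text mining is growing. Summarization reduces text. "
    "Extractive systems copy sentences. Abstractive systems rephrase."))
```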
Document and Text Processing
A. Pouramini; S. Khaje Hassani; Sh. Nasiri
Abstract
In this paper, we present an approach and a visual tool, called HWrap (Handle-Based Wrapper), for creating web wrappers that extract data records from web pages. Our approach relies mainly on the visible page content to identify data regions on a web page, and our extraction algorithm is inspired by the way a human user scans the page content for specific data. In particular, we use text features such as textual delimiters, keywords, constants, or text patterns, which we call handles, to construct patterns for the target data regions and data records. We offer a polynomial-time algorithm in which these patterns are checked against the page elements in a mixed bottom-up and top-down traversal of the DOM tree. The extracted data is directly mapped onto a hierarchical XML structure, which forms the output of the wrapper. The wrappers generated by this method are robust and independent of the HTML structure; therefore, they can be adapted to similar websites to gather and integrate information.
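The sketch below illustrates the handle idea on a toy DOM-like tree: a handle combining a constant keyword and a text pattern is checked against page elements during a traversal, and matching values are collected. HWrap's actual handle syntax, mixed bottom-up/top-down strategy, and XML output are not reproduced here.

```python
# Handle-based extraction over a toy DOM-like tree of (tag, text, children).
import re

page = ("div", "", [
    ("span", "Price: $19.99", []),
    ("div", "", [("span", "Price: $5.00", [])]),
    ("span", "About us", []),
])

HANDLE = re.compile(r"Price:\s*\$(\d+\.\d{2})")  # a constant + a text pattern

def extract(node, out):
    """Top-down traversal; the handle regex fires on matching element text."""
    tag, text, children = node
    m = HANDLE.search(text)
    if m:
        out.append(m.group(1))
    for child in children:
        extract(child, out)
    return out

print(extract(page, []))  # ['19.99', '5.00']
```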
Document and Text Processing
F. Safi-Esfahani; Sh. Rakian; M.H. Nadimi-Shahraki
Abstract
Plagiarism, defined as “the wrongful appropriation of other writers’ or authors’ works and ideas without citing or informing them,” poses a major challenge to the spread and publication of knowledge. Plagiarism falls into four categories: direct, paraphrasing (rewriting), translational, and combinatory. This paper addresses translational plagiarism, sometimes referred to as cross-lingual plagiarism, in which writers meld a translation with their own words and ideas. Building on monolingual plagiarism detection methods, this paper ultimately intends to find a way to detect cross-lingual plagiarism. A framework called Multi-Lingual Plagiarism Detection (MLPD) is presented for cross-lingual plagiarism analysis with the ultimate objective of detecting plagiarism cases. English is the reference language, and Persian materials are back-translated using translation tools. The data for assessing MLPD were obtained from the English-Persian Mizan parallel corpus, and Apache Solr was applied to crawl and index the documents. The mean accuracy of the proposed method was 98.82% when employing highly accurate translation tools, indicating the method's high accuracy, whereas the Google translation service yielded a mean accuracy of 56.9%. These tests demonstrate that improved translation tools enhance the accuracy of the proposed method.
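A schematic of the back-translation pipeline under stated assumptions: `translate` is a placeholder for an external Persian-to-English translation tool, and document similarity is plain TF-IDF cosine rather than MLPD's full scoring.

```python
# Back-translate a suspicious Persian document, then compare it against
# English reference documents with TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def translate(persian_text):
    """Placeholder: in MLPD this calls an external Persian->English tool."""
    return "the quick brown fox jumps over the lazy dog"

references = ["the quick brown fox jumps over a sleeping dog",
              "stock markets fell sharply on Monday"]

suspicious_en = translate("متن مشکوک فارسی")
tfidf = TfidfVectorizer().fit(references + [suspicious_en])
scores = cosine_similarity(tfidf.transform([suspicious_en]),
                           tfidf.transform(references))[0]
print({r[:30]: round(s, 2) for r, s in zip(references, scores)})
```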