H.3.2.10. Medicine and science
Fahimeh Hafezi; Maryam Khodabakhsh
Abstract
Coronavirus disease, as a persistent epidemic of acute respiratory syndrome, posed a challenge to global healthcare systems. Many people have been forced to stay in their homes due to unprecedented quarantine practices around the world. Since most people used social media during the Coronavirus epidemic, analyzing user-generated social content can provide new insights and serve as a clue to track changes and their occurrence over time. An active area in this space is the prediction of new infected cases from Coronavirus-related social content. Identifying the social content that relates to Coronavirus is a challenging task, because a significant number of posts contain Coronavirus-related content but do not include hashtags or Corona-related words. Conversely, many posts that carry the hashtag or the word Corona are not really related to the meaning of Coronavirus and are mostly promotional. In this paper, we propose a semantic approach based on word embedding techniques to model Corona and then introduce a new feature, namely semantic similarity, to measure the similarity of a given post to Corona in the semantic space. Furthermore, we propose two other features, namely fear emotion and hope feeling, to identify Coronavirus-related posts. These features are used as statistical indicators in a regression model to estimate the number of new infected cases. We evaluate our features on a Persian dataset of Instagram posts collected in the first wave of Coronavirus, and demonstrate that incorporating the proposed features leads to improved performance in Coronavirus incidence rate estimation.
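As an illustration of the semantic-similarity feature (not the paper's exact implementation), a minimal sketch follows: a post is represented as the average of its word vectors and compared against a "Corona" concept centroid by cosine similarity. The two-dimensional embeddings below are hypothetical toy values for illustration only.

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def post_vector(tokens, embeddings):
    """Average the embeddings of the in-vocabulary tokens of a post."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    if not vecs:
        return None
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

# toy 2-d embeddings (hypothetical values, for illustration only)
emb = {
    "corona": [1.0, 0.1], "virus": [0.9, 0.2], "mask": [0.8, 0.3],
    "sale": [0.1, 1.0], "discount": [0.0, 0.9],
}
corona_centroid = post_vector(["corona", "virus"], emb)

related = post_vector(["mask", "virus"], emb)          # topical post, no "corona" token
promo = post_vector(["sale", "discount", "corona"], emb)  # promotional post with the word
print(cosine(related, corona_centroid) > cosine(promo, corona_centroid))  # True
```

The point of the sketch is that a topical post without the keyword can still score higher in the semantic space than a promotional post that contains it.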
K. Kiani; R. Hematpour; R. Rastgoo
Abstract
Image colorization is an interesting yet challenging task, owing to the difficulty of producing a natural-looking color image from an arbitrary grayscale image. To tackle this challenge with a fully automatic procedure, we propose a Convolutional Neural Network (CNN)-based model that benefits from the impressive ability of CNNs in image processing tasks. To this end, we propose a deep model for automatic grayscale image colorization. Harnessing convolutional pre-trained models, we fuse three of them, VGG16, ResNet50, and Inception-v2, to improve the model performance. The average of the three model outputs is used to obtain richer features. The fused features are fed to an encoder-decoder network to obtain a color image from a grayscale input image. We perform a step-by-step analysis of different pre-trained models and fusion methodologies to arrive at a more accurate combination of these models in the proposed model. Results on the LFW and ImageNet datasets confirm the effectiveness of our model compared to state-of-the-art alternatives in the field.
Z. Falahiazar; A.R. Bagheri; M. Reshadi
Abstract
Spatio-temporal (ST) clustering is a relatively new field in data mining with great popularity, especially in geographic information. Moving objects are a type of ST data where the available information on these objects includes their last position. The strategy of performing the clustering operation on all time sequences is used for clustering moving objects. The problem with density-based clustering that uses this strategy is that the density of clusters may change at any point in time because of the displacement of points. Hence, the input parameters of an algorithm like DBSCAN used to cluster moving objects will change and have to be determined again. The DBSCAN-based methods proposed so far assume that the values of the input parameters are fixed over time and do not provide a solution for determining them automatically. Nonetheless, as the objects move and the density of the clusters changes, these parameters have to be determined appropriately again at each time interval. This paper uses a dynamic multi-objective genetic algorithm to determine the parameters of the DBSCAN algorithm dynamically and automatically to solve this problem. In each time interval, the proposed algorithm uses the clustering information of the previous time interval to determine the parameters. Beijing traffic control data was used as a moving-object dataset to evaluate the proposed algorithm. The experiments show that using the proposed algorithm for dynamic determination of the DBSCAN input parameters outperforms DBSCAN with input parameters fixed over time in terms of the Silhouette and Outlier indices.
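For readers unfamiliar with the two parameters being tuned, a minimal generic DBSCAN sketch follows (this is the textbook algorithm, not the paper's dynamic variant): `eps` is the neighborhood radius and `min_pts` the density threshold that the genetic algorithm would re-determine at each time interval. The points below are toy data.

```python
def dbscan(points, eps, min_pts):
    """Label each 2-d point with a cluster id, or -1 for noise."""
    dist = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    neighbors = lambda i: [j for j in range(len(points))
                           if dist(points[i], points[j]) <= eps]
    labels = [None] * len(points)
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1              # provisionally noise
            continue
        labels[i] = cid                 # i is a core point: start a cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid         # border point reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cid
            js = neighbors(j)
            if len(js) >= min_pts:      # j is also a core point: expand further
                queue.extend(js)
        cid += 1
    return labels

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (50, 50)]
print(dbscan(pts, eps=2.0, min_pts=2))  # [0, 0, 0, 1, 1, -1]
```

If the points drift apart at the next time step, the same `eps` may no longer separate the clusters, which is exactly why fixed parameters fail for moving objects.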
Z. Shahpar; V. Khatibi; A. Khatibi Bardsiri
Abstract
Software effort estimation plays an important role in software project management, and analogy-based estimation (ABE) is the most common method used for this purpose. ABE estimates the effort required for a new software project based on its similarity to previous projects. Similarity between projects is evaluated based on a set of project features, each of which has a particular effect on the degree of similarity between projects and on the effort feature. The present study examines a hybrid PSO-SA approach for feature weighting in analogy-based software project effort estimation. The proposed approach was implemented and tested on two well-known datasets of software projects. The performance of the proposed model was compared with other optimization algorithms based on the MMRE, MDMRE, and PRED(0.25) measures. The results showed that weighted ABE models provide more accurate effort estimates than unweighted ABE models, and that the hybrid PSO-SA approach led to better and more accurate results than the other weighting approaches on both datasets.
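As an illustration of the ABE core (not the paper's PSO-SA search itself), the sketch below estimates effort as the mean effort of the k most similar past projects under a weighted distance; the feature values, efforts, and weights are hypothetical, standing in for what a PSO-SA search might produce.

```python
def weighted_distance(a, b, w):
    """Feature-weighted Euclidean distance between two projects."""
    return sum(wi * (ai - bi) ** 2 for wi, ai, bi in zip(w, a, b)) ** 0.5

def abe_estimate(new_project, history, weights, k=2):
    """Estimate effort as the mean effort of the k most similar past projects."""
    ranked = sorted(history,
                    key=lambda p: weighted_distance(new_project, p["features"], weights))
    return sum(p["effort"] for p in ranked[:k]) / k

# hypothetical history: features could be e.g. (size, team experience)
history = [
    {"features": [10, 3], "effort": 120},
    {"features": [12, 4], "effort": 150},
    {"features": [40, 9], "effort": 600},
]
weights = [0.8, 0.2]   # hypothetical weights a PSO-SA search might output
print(abe_estimate([11, 3], history, weights, k=2))  # 135.0
```

The weighting step is what the hybrid PSO-SA optimizes: it searches the weight vector that minimizes an error measure such as MMRE over the historical projects.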
B.3. Communication/Networking and Information Technology
Newsha Nowrozian; Farzad Tashtarian; Yahya Forghani
Abstract
Wireless rechargeable sensor networks (WRSNs) are widely used in many fields. However, the limited battery capacity of sensor nodes (SNs) hinders their development. To extend the battery life of SNs, they can be charged by a mobile charger (MC) equipped with radio frequency-based wireless power transfer (WPT). This paper addresses the problem of optimizing route planning and charging for an MC with directional charging in on-demand networks. A mixed integer linear programming (MILP) model is proposed to obtain the appropriate stopping points (SPs) and charging orientation angles so as to respond to incoming requests in the shortest possible time and with minimum energy consumption. First, to select the SPs and the charging orientations, we utilize a clustering and discretization technique that minimizes the number of SPs while maximizing the charging coverage. Then, to decrease the charging time of the requesting SNs as well as the MC's energy consumption, we propose a heuristic search algorithm for adjusting the moving path of the directional mobile charger. Finally, experimental simulations are performed to evaluate the performance of the proposed directional charging scheduling algorithm, and the results reveal that the suggested approach outperforms existing studies in terms of MC energy consumption, charging delay, and distance traveled.
N. Elyasi; M. Hosseini Moghadam
Abstract
In this paper, we use the topological data analysis (TDA) mapper algorithm alongside a deep convolutional neural network to classify medical images. Deep learning models and convolutional neural networks can capture the Euclidean relation of a data point with its neighboring data points, such as the pixels of an image; they are particularly good at modeling data structures that live in Euclidean space, but are not effective at modeling data structures that live in non-Euclidean spaces. Methods based on topological data analysis have the ability to extract not only Euclidean but also topological features of data. For the first time in this paper, we apply a neural network as one of the filter steps of the Kepler Mapper algorithm to classify skin cancer images. The major advantage of this method is that Kepler Mapper visualizes the classification result as a simplicial complex, while the neural network increases the accuracy of classification. Furthermore, we apply the TDA mapper and persistent homology algorithms to analyze the layers of the Xception network in different training epochs. We also use persistence diagrams to visualize the results of this layer-wise analysis of the Xception network and then compare them using Wasserstein distances.
A. Hasan-Zadeh; F. Asadi; N. Garbazkar
Abstract
For an economic review of food prices in May 2019, to determine the trend of rising or falling prices compared to previous periods, we considered the prices of food items at that time. The types of items consumed during specific periods in urban areas and across the whole country were selected for our statistical analysis. Among the various methods of statistical modelling and prediction, and in a new approach, we modeled the data using data mining techniques consisting of decision tree methods, association rules, and Bayes' law. Then, prediction, validation, and standardization of the accuracy of the validation were performed on them. Results of data validation at the urban and national levels, and the results of the standardization of the accuracy of validation at the urban and national levels, are presented with the desired accuracy.
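As an illustration of the association-rule component (not the paper's actual data or rules), the sketch below computes the two standard rule measures, support and confidence, over hypothetical market baskets of food items.

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item of the itemset."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, lhs, rhs):
    """Confidence of the rule lhs -> rhs: support(lhs ∪ rhs) / support(lhs)."""
    return support(transactions, set(lhs) | set(rhs)) / support(transactions, lhs)

# hypothetical baskets, for illustration only
baskets = [
    {"bread", "butter"}, {"bread", "butter", "rice"},
    {"rice", "oil"}, {"bread", "rice"},
]
print(support(baskets, {"bread"}))                 # 0.75
print(confidence(baskets, {"bread"}, {"butter"}))  # 0.666...
```

A rule such as "bread → butter" with high confidence is the kind of co-movement pattern that, applied to price data, supports trend prediction across item groups.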
H.6.3.3. Pattern analysis
Meysam Roostaee; Razieh Meidanshahi
Abstract
In this study, we sought to minimize the need for redundant blood tests in diagnosing common diseases by leveraging unsupervised data mining techniques on a large-scale dataset of over one million patients' blood test results. We excluded non-numeric and subjective data to ensure precision. To identify relationships between attributes, we applied a suite of unsupervised methods including preprocessing, clustering, and association rule mining. Our approach uncovered correlations that enable healthcare professionals to detect potential acute diseases early, improving patient outcomes and reducing costs. The reliability of our extracted patterns also suggests that this approach can lead to significant time and cost savings while reducing the workload for laboratory personnel. Our study highlights the importance of big data analytics and unsupervised learning techniques in increasing efficiency in healthcare centers.
F.2. Numerical Analysis
S. Sareminia
Abstract
In recent years, the occurrence of various pandemics (COVID-19, SARS, etc.) and their widespread impact on human life have led researchers to focus on their pathology and epidemiology components. One of the most significant consequences of these epidemics is the human mortality rate, which has highly adverse social effects. This study, in addition to the major attributes affecting the COVID-19 mortality rate (health factors, people's health status, and climate), considers the social and economic components of societies. These components have been extracted from the countries' Human Development Index (HDI), and the effect of the level of social development on the mortality rate has been investigated using ensemble data mining methods. The results indicate that the level of community education has the strongest effect on the disease mortality rate; indeed, the extent of its effect is much higher than that of environmental factors such as air temperature, regional health factors, and community welfare. This is probably due to the ability of knowledge-based societies to manage crises, their attention to health advisories, their lower susceptibility to rumors, and consequently a lower incidence of mental health problems. This study shows the impact of education on reducing the severity of the crisis in communities and opens a new window on cultural and social factors in the interpretation of medical data. Furthermore, according to the results, and comparing different single and ensemble data mining methods, the ensemble method achieves the best results in terms of classification accuracy and prediction error.
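As an illustration of the ensemble idea in general (not the paper's specific ensemble), a minimal majority-vote combiner over hypothetical base-classifier predictions:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions into one ensemble label."""
    return Counter(predictions).most_common(1)[0][0]

# hypothetical predictions from three base classifiers for four regions
per_model = [
    ["high", "low", "low", "high"],   # model 1
    ["high", "low", "high", "high"],  # model 2
    ["low",  "low", "low",  "high"],  # model 3
]
ensemble = [majority_vote(p) for p in zip(*per_model)]
print(ensemble)  # ['high', 'low', 'low', 'high']
```

Voting is the simplest way an ensemble can outperform its members: an error by one base model is outvoted when the other two agree.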
Oladosu Oladimeji; Olayanju Oladimeji
Abstract
Breast cancer is the second major cause of death and accounts for 16% of all cancer deaths worldwide. Most methods of detecting breast cancer, such as mammography, are very expensive and difficult to interpret. There are also limitations such as cumulative radiation exposure, over-diagnosis, and false positives and negatives in women with dense breasts, which pose certain uncertainties in high-risk populations. The objective of this study is to detect breast cancer through blood analysis data using classification algorithms, which will serve as a complement to these expensive methods. High-ranking features were extracted from the dataset. The KNN, SVM, and J48 algorithms were used as the training platform to classify 116 instances. Furthermore, 10-fold cross-validation and holdout procedures were used, coupled with changing of the random seed. The results showed that the KNN algorithm has the best accuracy, at 89.99% and 85.21% for the cross-validation and holdout procedures respectively. It is followed by J48 with 84.65% and 75.65% for the two procedures respectively, while SVM scored 77.58% and 68.69% respectively. Although it was also discovered that blood glucose level is a major determinant in detecting breast cancer, it has to be combined with other attributes to make a decision, because of other health issues such as diabetes. Given these results, women are advised to have regular check-ups, including blood analysis, in order to know which blood components need to be worked on to prevent breast cancer, based on the model generated in this study.
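As an illustration of the best-performing classifier family here, a minimal from-scratch k-nearest-neighbors sketch follows. The (glucose, resistin)-style feature pairs and labels are hypothetical toy values, not the study's dataset.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); return the majority label of the k nearest."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda row: dist(row[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# hypothetical blood-analysis feature pairs, for illustration only
train = [
    ([85, 10], "healthy"), ([90, 12], "healthy"), ([88, 11], "healthy"),
    ([130, 25], "patient"), ([125, 22], "patient"), ([140, 28], "patient"),
]
print(knn_predict(train, [128, 24], k=3))  # patient
```

In practice features would be normalized first, since KNN's distance is dominated by whichever attribute has the largest numeric range.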
H.3. Artificial Intelligence
Saheb Ghanbari Motlagh; Fateme Razi Astaraei; Mojtaba Hajihosseini; Saeed Madani
Abstract
This study explores the potential use of Machine Learning (ML) techniques to enhance three types of nano-based solar cells. Perovskites of methylammonium-free formamidinium (FA) and mixed cation-based cells exhibit a boosted efficiency when employing ML techniques. Moreover, ML methods are utilized to identify optimal donor complexes and high blind temperature materials, and to advance the thermodynamic stability of perovskites. Another significant application of ML in dye-sensitized solar cells (DSSCs) is the detection of novel dyes, solvents, and molecules for improving the efficiency and performance of solar cells; some of these materials have increased cell efficiency, short-circuit current, and light absorption by more than 20%. ML algorithms that fine-tune network and plasmonic field bandwidths improve the efficiency and light absorption of surface plasmon resonance (SPR) solar cells. This study outlines the potential of ML techniques to optimize and improve the development of nano-based solar cells, leading to promising results for the field of solar energy generation and supporting the demand for sustainable and dependable energy.
A. Hadian; M. Bagherian; B. Fathi Vajargah
Abstract
Background: One of the most important concepts in cloud computing is modeling the problem as a multi-layer optimization problem, which leads to cost savings in designing and operating the networks. Previous researchers have modeled the two-layer network operating problem as an Integer Linear Programming (ILP) problem and, due to the computational complexity of solving it jointly, suggested a two-stage procedure that considers one layer at each stage. Aim: In this paper, considering the ILP model and using some of its properties, we propose a heuristic algorithm for solving the model jointly, considering unicast, multicast, and anycast flows simultaneously. Method: We first sort demands in decreasing order and use a greedy method to realize the demands in that order. Due to the high computational complexity of the ILP model, the proposed heuristic algorithm is suitable for networks with a large number of nodes; in this regard, various examples are solved with the CPLEX and MATLAB software packages. Results: Our simulation results show that for small values of M and N, CPLEX fails to find the optimal solution, while AGA finds a near-optimal solution quickly. Conclusion: The proposed greedy algorithm can solve large-scale networks approximately in polynomial time, and its approximation is reasonable.
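The sort-then-greedy skeleton of the Method section can be sketched as follows. This is a deliberately simplified stand-in: the actual algorithm routes unicast, multicast, and anycast flows over a two-layer network, whereas here a single capacity and scalar demands (hypothetical values) illustrate only the "decreasing order, accept what fits" idea.

```python
def greedy_realize(demands, capacity):
    """Sort demands in decreasing order and accept each one that still fits."""
    accepted, used = [], 0
    for d in sorted(demands, reverse=True):
        if used + d <= capacity:
            accepted.append(d)
            used += d
    return accepted, used

demands = [3, 7, 2, 5, 4]            # hypothetical demand volumes
accepted, used = greedy_realize(demands, capacity=12)
print(accepted, used)  # [7, 5] 12
```

Handling the largest demands first is the usual greedy heuristic for such packing-style subproblems: big demands are the hardest to place, so they get first claim on the residual capacity.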
Seyedeh R. Mahmudi Nezhad Dezfouli; Y. Kyani; Seyed A. Mahmoudinejad Dezfouli
Abstract
Due to the small size, low contrast, and variable position, shape, and texture of multiple sclerosis lesions, their automatic diagnosis and segmentation in magnetic resonance images is one of the challenges of medical image processing. Early diagnosis of these lesions in the first stages of the disease can effectively support the diagnosis and evaluation of treatment. Automated segmentation is also a powerful tool to assist professionals in improving the accuracy of disease diagnosis. This study uses modified adaptive multi-level conditional random fields and an artificial neural network to segment and diagnose multiple sclerosis lesions. Instead of assuming the model coefficients to be constant, they are considered variables in multi-level statistical models. This study aimed to evaluate the probability of lesions based on their severity, texture, and adjacent areas. The proposed method was applied to 130 MR images of multiple sclerosis patients in two test stages and resulted in 98% precision. The proposed method has also reduced the error detection rate by correcting the lesion boundaries using the average intensity of neighborhoods, rotation invariance, and texture for very small lesions of 3-5 voxels, and it has shown very few false-positive lesions. The proposed model achieved a high sensitivity of 91% with an average of 0.5 false positives.
L. Falahiazar; V. Seydi; M. Mirzarezaee
Abstract
Many real-world problems have multiple conflicting objectives, and optimization between contradictory objectives is very difficult. In recent years, Multi-objective Evolutionary Algorithms (MOEAs) have shown great performance in optimizing such problems, so the development of MOEAs will always lead to the advancement of science. The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is considered one of the most widely used evolutionary algorithms, and many MOEAs have emerged to resolve its problems, such as the Sequential Multi-Objective Algorithm (SEQ-MOGA). SEQ-MOGA presents a new survival selection that arranges individuals systematically, so that the chromosomes can cover the entire Pareto front region. In this study, the Archive Sequential Multi-Objective Algorithm (ASMOGA) is proposed to develop and improve SEQ-MOGA. ASMOGA uses an archive technique to save the history of the search procedure, so that diversity in the decision space is maintained adequately. To demonstrate the performance of ASMOGA, it is compared with several state-of-the-art MOEAs on benchmark functions and the I-beam design problem. The optimization results are evaluated by performance metrics such as hypervolume, Generational Distance, Spacing, and the t-test (a statistical test); based on the results, the superiority of the proposed algorithm is clearly identified.
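The Pareto-front notion underlying all of these MOEAs can be made concrete with a minimal dominance check (a generic textbook sketch, not any of the cited algorithms), here for minimization of two objectives on toy points:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Keep the points not dominated by any other point: the Pareto front."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(non_dominated(pts))  # [(1, 5), (2, 3), (4, 1)]
```

Survival selection schemes like those of NSGA-II and SEQ-MOGA differ mainly in how they rank and spread individuals around this non-dominated set.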
H.3.8. Natural Language Processing
P. Kavehzadeh; M. M. Abdollah Pour; S. Momtazi
Abstract
Over the last few years, text chunking has taken a significant part in sequence labeling tasks. Although a large variety of methods have been proposed for shallow parsing in English, most proposed approaches for text chunking in the Persian language are based on simple and traditional concepts. In this paper, we propose using state-of-the-art transformer-based contextualized models, namely BERT and XLM-RoBERTa, as the major structure of our models. A Conditional Random Field (CRF), the combination of a Bidirectional Long Short-Term Memory (BiLSTM) and a CRF, and a simple dense layer are employed after the transformer-based models to enhance the model's performance in predicting chunk labels. Moreover, we provide a new dataset for noun phrase chunking in Persian, which includes annotated Persian news text. Our experiments reveal that XLM-RoBERTa achieves the best performance among all the architectures tried on the proposed dataset. The results also show that using a single CRF layer yields better results than a dense layer, and even than the combination of BiLSTM and CRF.
H.3. Artificial Intelligence
Amirhossein Khabbaz; Mansoor Fateh; Ali Pouyan; Mohsen Rezvani
Abstract
Autism spectrum disorder (ASD) is a collection of variable characteristics. These characteristics are marked by anomalies in reciprocal social communication and difficulties in perceiving communication patterns; exclusive, repeated interests and actions also identify ASD. Computer games have positive effects on autistic children, and serious games have been widely used to elevate these children's ability to communicate with other individuals. In this paper, we propose an adaptive serious game to rate the social skills of autistic children. The proposed serious game employs a reinforcement learning mechanism to learn such ratings adaptively for the players, and it uses fuzzy logic to estimate the communication skills of autistic children. The game adapts itself to the level of the child with autism; to that end, it uses an intelligent agent to tune the challenges through playtime. To dynamically evaluate the communication skills of these children, the game challenges may grow harder based on the development of a child's skills during play. We also employ fuzzy logic to estimate the playing abilities of the player periodically. Fifteen autistic children participated in experiments to evaluate the presented serious game. The experimental results show that the proposed method is effective in rating the communication skills of autistic children.
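The adaptive feedback loop can be sketched in its simplest form (this is only the raise-on-success/lower-on-failure idea, not the paper's reinforcement learning plus fuzzy estimation; levels, step size, and outcomes are hypothetical):

```python
def adapt_difficulty(level, success, step=1, lo=1, hi=10):
    """Raise the challenge level after a success, lower it after a failure, within bounds."""
    level += step if success else -step
    return max(lo, min(hi, level))

# simulate a hypothetical play session: True = the child solved the challenge
outcomes = [True, True, False, True, True, True, False]
level = 5
for ok in outcomes:
    level = adapt_difficulty(level, ok)
print(level)  # 8
```

The trajectory of such a level variable over a session is itself a crude skill signal, which a fuzzy estimator can smooth into a communication-skill rating.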
N. Nowrozian; F. Tashtarian
Abstract
Battery power limitation of sensor nodes (SNs) is a major challenge for wireless sensor networks (WSNs), as it affects network survival. Thus, optimizing the energy consumption of the SNs and increasing their lifetime, thereby extending the lifetime of WSNs, are of crucial importance in these types of networks. Mobile chargers (MCs) and wireless power transfer (WPT) technologies have long played an important role in WSNs, and much research has been done in recent decades on how to use the MC to enhance the performance of WSNs. In this paper, we first review the application of MC and WPT technologies in WSNs. We then consider the dispatching issues of the MC in the role of power transmitter in WSNs, categorize the existing approaches, and study the purposes and limitations of MC dispatching. Next, an overview of the existing articles is presented; to make the material easier to follow, tables and figures are offered that summarize the existing methods, and we examine them along different dimensions such as their advantages and disadvantages. Finally, the future prospects of MCs are discussed.
S. Ghandibidgoli; H. Mokhtari
Abstract
In many applications of robotics, a mobile robot should be guided from a source to a specific destination. The automatic control and guidance of a mobile robot is a challenge in the context of robotics, so in the current paper this problem is studied using various machine learning methods. Controlling a mobile robot means helping it make the right decision about changing direction according to the information read by the sensors mounted around the robot's waist. The machine learning methods are trained using three large datasets of sensor readings obtained from the UCI machine learning repository. The employed methods include (i) discriminators: the greedy hypercube classifier and support vector machines; (ii) parametric approaches: the naive Bayes classifier with and without dimensionality reduction methods; (iii) semiparametric algorithms: the Expectation-Maximization (EM) algorithm, C-means, K-means, and agglomerative clustering; (iv) nonparametric approaches for estimating the density function: histogram and kernel estimators; (v) nonparametric approaches for learning: k-nearest neighbors and decision trees; and (vi) combinations of multiple learners: boosting and bagging. These methods are compared based on various metrics. Computational results indicate superior performance of the implemented methods compared to previous methods on the mentioned datasets. In general, boosting, bagging, the unpruned tree, and the pruned tree (θ = 10⁻⁷) give better results than the existing ones. The implemented decision tree is also more efficient than the other employed methods, improving the classification precision, TP-rate, FP-rate, and MSE of the classes by 0.1%, 0.1%, 0.001%, and 0.001% respectively.
H.3.7. Learning
Laleh Armi; Elham Abbasi
Abstract
In this paper, we propose an innovative classification method for tree bark classification and tree species identification. The proposed method consists of two steps. In the first step, we take advantage of ILQP, a rotationally invariant, noise-resistant, and fully descriptive color texture feature extraction method. Then, in the second step, a new classification method called stacked mixture of ELM-based experts with a trainable gating network (stacked MEETG) is proposed. The proposed method is evaluated using the Trunk12, BarkTex, and AFF datasets. The performance of the proposed method on these three bark datasets shows that our approach provides better accuracy than other state-of-the-art methods. Our proposed method achieves average classification accuracies of 92.79% (Trunk12), 92.54% (BarkTex), and 91.68% (AFF). Additionally, the results demonstrate that ILQP has better texture feature extraction capabilities than similar methods such as ILTP. Furthermore, stacked MEETG has a strong influence on the classification accuracy.
N. Esfandian; F. Jahani bahnamiri; S. Mavaddati
Abstract
This paper proposes a novel method for voice activity detection based on clustering in the spectro-temporal domain. In the proposed algorithm, an auditory model is used to extract spectro-temporal features, and Gaussian mixture model and WK-means clustering methods are used to reduce the dimensionality of the spectro-temporal space. The energy and positions of the clusters are then used for voice activity detection: silence and speech are discriminated using the attributes of the clusters and a threshold value updated in each frame. Since it has the highest energy, the first cluster is treated as the main speech section in the computation. The efficiency of the proposed method was evaluated for silence/speech discrimination under different noisy conditions, with the displacement of clusters in the spectro-temporal domain taken as the criterion for feature robustness. According to the results, the proposed method improves the speech/non-speech segmentation rate compared to temporal and spectral features at low signal-to-noise ratios (SNRs).
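The clustering-based silence/speech decision can be illustrated with a deliberately simplified sketch: two-cluster one-dimensional k-means on frame energies, with the higher-energy cluster treated as speech. This stands in for the paper's GMM/WK-means clustering of full spectro-temporal features; all values below are illustrative:

```python
def kmeans_1d(values, iters=20):
    """Two-cluster 1-D k-means: returns the two centroids, here used
    to separate low-energy (silence) from high-energy (speech) frames."""
    c = [min(values), max(values)]
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            groups[abs(v - c[1]) < abs(v - c[0])].append(v)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return c

def is_speech(frame_energy, centroids):
    """A frame is speech if it lies closer to the higher centroid."""
    hi, lo = max(centroids), min(centroids)
    return abs(frame_energy - hi) < abs(frame_energy - lo)

energies = [0.1, 0.2, 0.15, 3.1, 2.8, 3.3, 0.12]   # toy frame energies
c = kmeans_1d(energies)
print([is_speech(e, c) for e in energies])
```

The paper's per-frame threshold update plays the role that the recomputed centroids play in this sketch.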
F. Rismanian Yazdi; M. Hosseinzadeh; S. Jabbehdari
Abstract
Wireless body area networks (WBANs) are innovative technologies that are anticipated to greatly advance healthcare monitoring systems. A WBAN consists of biomedical sensors that can be worn on or implanted in the body. The sensors monitor vital signs, process the data, and transmit them to a central server. Biomedical sensors have limited energy resources and need an improved design for managing energy consumption. Therefore, DTEC-MAC (Diverse Traffic with Energy Consumption-MAC) is proposed, which prioritizes data classification in the cluster nodes and delivers medical data with energy management. The proposed method uses fuzzy logic based on the distance to the sink, the remaining energy, and the data length to select the cluster head. MATLAB was used to simulate the method, which was compared with the similar methods iM-SIMPLE, M-ATTEMPT, and ERP. Simulation results indicate that it better extends the network lifetime, guarantees minimum energy and packet delivery rates, and maximizes throughput.
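A crisp, simplified stand-in for the fuzzy cluster-head selection might look as follows. The normalization ranges and weights are invented for illustration and are not the paper's fuzzy rule base; they only capture the stated intuition that a good cluster head is close to the sink, has high residual energy, and carries short payloads:

```python
def cluster_head_score(dist_to_sink, residual_energy, data_length,
                       max_dist=100.0, max_energy=1.0, max_len=512):
    """Toy stand-in for fuzzy inference: normalize the three inputs to
    [0, 1] and combine them with illustrative weights."""
    closeness = 1 - dist_to_sink / max_dist
    energy = residual_energy / max_energy
    brevity = 1 - data_length / max_len
    return 0.4 * energy + 0.4 * closeness + 0.2 * brevity

# Two candidate nodes: (distance to sink, residual energy, data length)
nodes = {
    "n1": (80.0, 0.9, 256),   # far from the sink but energetic
    "n2": (20.0, 0.7, 256),   # close to the sink, moderate energy
}
head = max(nodes, key=lambda n: cluster_head_score(*nodes[n]))
print(head)  # -> n2
```

A real fuzzy controller would replace the weighted sum with membership functions and a rule base, but the selection interface is the same: score every candidate, pick the maximum.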
H.3. Artificial Intelligence
Hassan Haji Mohammadi; Alireza Talebpour; Ahamd Mahmoudi Aznaveh; Samaneh Yazdani
Abstract
Coreference resolution is one of the essential tasks of natural language processing. This task identifies all in-text expressions that refer to the same entity in the real world. Coreference resolution is used in other fields of natural language processing, such as information extraction, machine translation, and question answering. This article presents a new coreference resolution corpus in Persian named the Mehr corpus. The article's primary goal is to develop a Persian coreference corpus that resolves some of the shortcomings of previous Persian corpora while maintaining a high inter-annotator agreement. This corpus annotates coreference relations for noun phrases, named entities, pronouns, and nested named entities. Two baseline pronoun resolution systems are developed, and their results are reported. The corpus comprises 400 documents and about 170k tokens. Corpus annotation was done with the WebAnno annotation tool.
Z. Imanimehr
Abstract
Peer-to-peer video streaming has received considerable attention during recent years. Video streaming over peer-to-peer networks is an attractive way to stream video on the Internet due to its high scalability, high video quality, and low bandwidth requirements. This paper addresses live video streaming in peer-to-peer networks that contain selfish peers. To encourage peers to cooperate in video distribution, tokens are used as an internal currency: peers earn tokens when they accept requests from other peers to upload video chunks, and spend tokens when sending requests to download video chunks from other peers. To handle the heterogeneity in peer bandwidth, the video is assumed to be multi-layer coded. The same token is used for every layer but is priced differently per layer, so peers can request various qualities depending on their available token pools. A new token-based incentive mechanism is proposed that adapts the admission control policy of peers to the dynamics of the request submission, request arrival, request timing, and bandwidth availability processes. Because peer-to-peer requests can arrive at any time, a continuous-time Markov decision process is used.
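The earn-on-upload / spend-on-request token ledger with per-layer pricing can be sketched as follows (the class, method names, and prices are illustrative, not taken from the paper):

```python
class Peer:
    """Minimal token ledger: a peer earns tokens by uploading chunks
    and spends them to request downloads. Higher video layers (better
    quality) cost more tokens; the prices here are illustrative."""
    PRICE = {0: 1, 1: 2, 2: 3}     # tokens per chunk, by video layer

    def __init__(self, tokens=0):
        self.tokens = tokens

    def upload(self, layer):
        self.tokens += self.PRICE[layer]      # paid by the requester

    def request(self, layer):
        cost = self.PRICE[layer]
        if self.tokens < cost:
            return False                      # cannot afford this quality
        self.tokens -= cost
        return True

p = Peer()
p.upload(0); p.upload(2)             # earn 1 + 3 = 4 tokens
print(p.request(2), p.request(2))    # -> True False (only 1 token left)
```

The proposed mechanism sits on top of such a ledger: rather than a fixed affordability check, admission decisions adapt to the request and bandwidth processes modeled by the Markov decision process.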
M. Gordan; Saeed R. Sabbagh-Yazdi; Z. Ismail; Kh. Ghaedi; H. Hamad Ghayeb
Abstract
A structural health monitoring system contains two components: a data collection approach comprising a network of sensors for recording the structural responses, and an extraction methodology for obtaining useful information on the structural health condition. In this regard, data mining, one of the emerging computer-based technologies, can be employed to extract valuable information from the collected sensor databases. At the same time, data inverse analysis, a problem-driven procedure, has been developing rapidly. These two strategies should therefore be combined to satisfy the increasing demand for data analysis, especially in complex systems such as bridges. Consequently, this study develops a damage detection methodology based on them: an inverse analysis approach using data mining is applied to a composite bridge. To this end, the support vector machine (SVM) algorithm is utilized to generate patterns from a vibration characteristics dataset. To compare the robustness and accuracy of the predicted outputs, four kernel functions, namely linear, polynomial, sigmoid, and radial basis function (RBF), are used to build the patterns. The results demonstrate the feasibility of the proposed method for detecting damage in composite slab-on-girder bridges.
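The four kernel functions compared in the study have standard closed forms, sketched here for two toy feature vectors (the hyperparameter values `gamma`, `coef0`, and `degree` are illustrative defaults, not the study's tuned settings):

```python
import math

def linear(x, z):
    return sum(a * b for a, b in zip(x, z))

def polynomial(x, z, degree=2, coef0=1.0):
    return (linear(x, z) + coef0) ** degree

def rbf(x, z, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def sigmoid(x, z, gamma=0.5, coef0=0.0):
    return math.tanh(gamma * linear(x, z) + coef0)

# Toy vibration-feature vectors, purely illustrative.
x, z = (1.0, 2.0), (2.0, 0.5)
for k in (linear, polynomial, rbf, sigmoid):
    print(k.__name__, round(k(x, z), 4))
```

Each kernel induces a different similarity measure between vibration-feature vectors, which is why the resulting SVM patterns differ in robustness and accuracy.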
Document and Text Processing
Mina Tabatabaei; Hossein Rahmani; Motahareh Nasiri
Abstract
The search for effective treatments for complex diseases, while minimizing toxicity and side effects, has become crucial. However, identifying synergistic combinations of drugs is often a time-consuming and expensive process that relies on trial and error due to the vast search space involved. To address this issue, we present a deep learning framework that uses a diverse set of features, including chemical structure, biomedical literature embeddings, and biological network interaction data, to predict potentially synergistic combinations. Additionally, we employ autoencoders and principal component analysis (PCA) for dimensionality reduction on sparse data. In 10-fold cross-validation, we achieve an area under the curve (AUC) of 98 percent, surpassing seven previous state-of-the-art approaches by an average of 8%.
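The PCA dimensionality-reduction step can be sketched as follows (the matrix sizes are illustrative; the framework's actual feature dimensions and component counts differ):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project the rows of X onto their top principal components:
    center the features, eigendecompose the covariance matrix, and
    keep the directions of largest variance."""
    Xc = X - X.mean(axis=0)                       # center each feature
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)              # ascending eigenvalues
    top = vecs[:, np.argsort(vals)[::-1][:n_components]]
    return Xc @ top

rng = np.random.default_rng(0)
X = rng.random((20, 8))        # e.g. 20 drug pairs, 8 sparse features
Z = pca_reduce(X, 3)
print(Z.shape)                 # -> (20, 3)
```

In the framework itself this densified representation (from PCA or an autoencoder) is what the downstream synergy classifier consumes instead of the raw sparse features.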