L. Falahiazar; V. Seydi; M. Mirzarezaee
Abstract
Many real-world problems have multiple conflicting objectives, and optimizing between such contradictory objectives is very difficult. In recent years, Multi-objective Evolutionary Algorithms (MOEAs) have shown great performance in optimizing such problems, so the development of MOEAs will always lead to the advancement of science. The Non-dominated Sorting Genetic Algorithm II (NSGAII) is one of the most widely used evolutionary algorithms, and many MOEAs have emerged to resolve its shortcomings, such as the Sequential Multi-Objective Algorithm (SEQ-MOGA). SEQ-MOGA presents a new survival selection that arranges individuals systematically so that the chromosomes can cover the entire Pareto front region. In this study, the Archive Sequential Multi-Objective Algorithm (ASMOGA) is proposed to develop and improve SEQ-MOGA. ASMOGA uses an archive technique to save the history of the search procedure, so that diversity in the decision space is maintained adequately. To demonstrate its performance, ASMOGA is compared with several state-of-the-art MOEAs on benchmark functions and the I-beam design problem. The optimization results are evaluated with performance metrics such as hypervolume, Generational Distance, and Spacing, as well as the t-test (a statistical test); based on the results, the superiority of the proposed algorithm is clearly identified.
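Survival selection in NSGAII-style algorithms rests on Pareto dominance. As an illustrative sketch (not the authors' code), minimal dominance checking and first-front extraction for a minimization problem:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(points):
    """Return the non-dominated (first Pareto) front of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

For example, `first_front([(1, 2), (2, 1), (3, 3)])` keeps `(1, 2)` and `(2, 1)`, since neither dominates the other while both dominate `(3, 3)`.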
J.10.5. Industrial
Arezoo Zamany; Abbas Khamseh; Sayedjavad Iranbanfard
Abstract
The international transfer of high technologies plays a pivotal role in the transformation of industries and the transition to Industry 5.0 - a paradigm emphasizing human-centric, sustainable, and resilient industrial development. However, this process faces numerous challenges and complexities, necessitating a profound understanding of its key variables and concepts. The present research aimed to identify and analyze these variables in the realm of high technology transfer in Industry 5.0. Following a systematic literature review protocol, 84 relevant articles published between 2017 and 2024 were selected based on predefined criteria including relevance to the research topic, publication quality, and citation impact. These articles were analyzed using a comprehensive text mining approach incorporating keyword extraction, sentiment analysis, topic modeling, and concept clustering techniques implemented through Python libraries including NLTK, SpaCy, TextBlob, and Scikit-learn. The results categorize the key variables and concepts into five main clusters: high technologies (including AI, IoT, and robotics), technology transfer mechanisms, Industry 5.0 characteristics, implementation challenges (such as cybersecurity risks and high adoption costs) and opportunities (including increased productivity and innovation potential), and regulatory frameworks. These findings unveil various aspects of the technology transfer process, providing insights for stakeholders while highlighting the critical role of human-technology collaboration in Industry 5.0. The study's limitations include potential bias from focusing primarily on English-language literature and the inherent constraints of computational text analysis in capturing context-dependent nuances. This research contributes to a deeper understanding of technology transfer dynamics in Industry 5.0, offering practical implications for policymaking and implementation strategies.
H.3.8. Natural Language Processing
P. Kavehzadeh; M. M. Abdollah Pour; S. Momtazi
Abstract
Over the last few years, text chunking has played a significant part in sequence labeling tasks. Although a large variety of methods have been proposed for shallow parsing in English, most approaches proposed for text chunking in Persian are based on simple and traditional concepts. In this paper, we propose using state-of-the-art transformer-based contextualized models, namely BERT and XLM-RoBERTa, as the major structure of our models. A Conditional Random Field (CRF), the combination of Bidirectional Long Short-Term Memory (BiLSTM) and CRF, and a simple dense layer are employed on top of the transformer-based models to enhance performance in predicting chunk labels. Moreover, we provide a new dataset for noun-phrase chunking in Persian that includes annotated Persian news text. Our experiments reveal that XLM-RoBERTa achieves the best performance among all the architectures tried on the proposed dataset. The results also show that a single CRF layer yields better results than a dense layer and even the combination of BiLSTM and CRF.
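Chunk labels in such systems are typically BIO tags over tokens. The following generic snippet (illustrative, not from the paper) shows how predicted tags are decoded into noun-phrase chunks:

```python
def bio_to_chunks(tokens, tags):
    """Group tokens into noun-phrase chunks from BIO tags (B-NP, I-NP, O)."""
    chunks, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B-NP":                # a new chunk starts here
            if current:
                chunks.append(current)
            current = [token]
        elif tag == "I-NP" and current:  # continue the open chunk
            current.append(token)
        else:                            # O (or a stray I-NP) closes the chunk
            if current:
                chunks.append(current)
            current = []
    if current:
        chunks.append(current)
    return chunks
```

For instance, `bio_to_chunks(["the", "old", "house", "burned"], ["B-NP", "I-NP", "I-NP", "O"])` yields one chunk, `["the", "old", "house"]`.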
H.3. Artificial Intelligence
Amirhossein Khabbaz; Mansoor Fateh; Ali Pouyan; Mohsen Rezvani
Abstract
Autism spectrum disorder (ASD) is a collection of variable characteristics. These features are characterized by anomalies in reciprocal social communication and difficulties in perceiving communication patterns; exclusive, repetitive interests and actions also identify ASD. Computer games have positive effects on autistic children, and serious games have been widely used to improve these children's ability to communicate with other individuals. In this paper, we propose an adaptive serious game to rate the social skills of autistic children. The proposed serious game employs a reinforcement learning mechanism to learn such ratings adaptively for the players, and it uses fuzzy logic to estimate the communication skills of autistic children. The game adapts itself to the level of the child with autism: it uses an intelligent agent to tune the challenges during playtime. To dynamically evaluate the communication skills of these children, the game challenges may grow harder based on the development of a child's skills through playtime. We also employ fuzzy logic to periodically estimate the playing abilities of the player. Fifteen autistic children participated in experiments to evaluate the presented serious game. The experimental results show that the proposed method is effective in improving the communication skills of autistic children.
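Fuzzy skill estimation of this kind is commonly built from membership functions. As a hypothetical sketch (the paper's actual rule base and inputs are not given here), a triangular-membership grading of a player's task success rate:

```python
def triangular(x, a, b, c):
    """Triangular membership function supported on (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def skill_level(success_rate):
    """Fuzzy rating of communication skill from a task success rate in [0, 1].
    The three overlapping sets below are illustrative assumptions."""
    return {
        "low": triangular(success_rate, -0.5, 0.0, 0.5),
        "medium": triangular(success_rate, 0.0, 0.5, 1.0),
        "high": triangular(success_rate, 0.5, 1.0, 1.5),
    }
```

A success rate of 0.5 is fully "medium" (membership 1.0) and belongs to neither extreme, which is the kind of graded signal an adaptive agent can use to tune difficulty.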
N. Nowrozian; F. Tashtarian
Abstract
The battery power limitation of sensor nodes (SNs) is a major challenge for wireless sensor networks (WSNs) that affects network survival. Thus, optimizing the energy consumption of the SNs, increasing their lifetime, and thereby extending the lifetime of the WSN are of crucial importance in these types of networks. Mobile chargers (MCs) and wireless power transfer (WPT) technologies have long played an important role in WSNs, and much research has been done in recent decades on how to use MCs to enhance the performance of WSNs. In this paper, we first review the application of MC and WPT technologies in WSNs. Then, dispatching issues of the MC in its role as a power transmitter in WSNs are considered and the existing approaches are categorized, with the purposes and limitations of MC dispatching studied. An overview of the existing articles is then presented and, to make the contents easier to follow, tables and figures summarizing the existing methods are offered. We examine them along different dimensions, such as their advantages and disadvantages. Finally, the future prospects of MCs are discussed.
S. Ghandibidgoli; H. Mokhtari
Abstract
In many applications of robotics, a mobile robot should be guided from a source to a specific destination. The automatic control and guidance of a mobile robot is a challenge in the context of robotics, so in the current paper this problem is studied using various machine learning methods. Controlling a mobile robot means helping it make the right decision about changing direction according to the information read by the sensors mounted around the robot's waist. The machine learning methods are trained using three large datasets of sensor readings obtained from the UCI machine learning repository. The employed methods include (i) discriminators: the greedy hypercube classifier and support vector machines; (ii) parametric approaches: the naive Bayes classifier with and without dimensionality reduction; (iii) semiparametric algorithms: the Expectation-Maximization (EM) algorithm, C-means, K-means, and agglomerative clustering; (iv) nonparametric approaches for estimating the density function: histogram and kernel estimators; (v) nonparametric approaches for learning: k-nearest neighbors and decision trees; and (vi) combinations of multiple learners: boosting and bagging. These methods are compared based on various metrics. Computational results indicate superior performance of the implemented methods compared to previous methods on the mentioned datasets. In general, boosting, bagging, the unpruned tree, and the pruned tree (θ = 10⁻⁷) give better results than the existing ones. Also, the implemented decision tree is more efficient than the other employed methods and improves the classification precision, TP rate, FP rate, and MSE of the classes by 0.1%, 0.1%, 0.001%, and 0.001%, respectively.
H.3.7. Learning
Laleh Armi; Elham Abbasi
Abstract
In this paper, we propose an innovative classification method for tree bark classification and tree species identification. The proposed method consists of two steps. In the first step, we take advantage of ILQP, a rotationally invariant, noise-resistant, and fully descriptive color texture feature extraction method. Then, in the second step, a new classification method called stacked mixture of ELM-based experts with a trainable gating network (stacked MEETG) is proposed. The proposed method is evaluated using the Trunk12, BarkTex, and AFF datasets. Its performance on these three bark datasets shows that our approach provides better accuracy than other state-of-the-art methods. Our proposed method achieves average classification accuracies of 92.79% (Trunk12), 92.54% (BarkTex), and 91.68% (AFF). Additionally, the results demonstrate that ILQP has better texture feature extraction capabilities than similar methods such as ILTP. Furthermore, stacked MEETG shows a great influence on the classification accuracy.
F.4.18. Time series analysis
Fatemeh Moodi; Amir Jahangard Rafsanjani; Sajjad Zarifzadeh; Mohammad Ali Zare Chahooki
Abstract
This article proposes a novel hybrid network integrating three distinct architectures - CNN, GRU, and LSTM - to predict stock price movements. Combining feature extraction and sequence learning exploits the complementary strengths of these architectures and improves predictive performance. CNNs can effectively identify short-term dependencies and relevant features in time series, such as trends or spikes in stock prices. GRUs are designed to handle sequential data; they are particularly useful for capturing dependencies over time while being computationally less expensive than LSTMs. In the hybrid model, GRUs help maintain relevant historical information in the sequence without suffering from vanishing gradients, making them more efficient for long sequences. LSTMs excel at learning long-term dependencies in sequential data thanks to their memory cell structure; by retaining information over longer periods, they ensure that important trends are not lost, providing a deeper understanding of the time series. The novelty of the 1D-CNN-GRU-LSTM hybrid model lies in its ability to simultaneously capture short-term patterns and long-term dependencies in time series data, offering a more nuanced and accurate prediction of stock prices. The dataset comprises technical indicators, sentiment analysis, and various features derived from pertinent tweets. Stock price movement is categorized into three classes: Rise, Fall, and Stable. Evaluation of this model on five years of transaction data demonstrates its capability to forecast stock price movements with an accuracy of 0.93717. The proposed hybrid model improves on existing stock movement prediction models by 12% in both accuracy and F1-score.
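Rise/Fall/Stable targets for such models are typically derived from relative price changes. A minimal sketch (the ±0.5% stability threshold here is an assumption, not from the article):

```python
def label_movement(closing_prices, eps=0.005):
    """Label each day's move vs. the previous close as Rise/Fall/Stable.
    eps is an illustrative stability band (fraction of the previous close)."""
    labels = []
    for prev, cur in zip(closing_prices, closing_prices[1:]):
        change = (cur - prev) / prev
        if change > eps:
            labels.append("Rise")
        elif change < -eps:
            labels.append("Fall")
        else:
            labels.append("Stable")
    return labels
```

For example, `label_movement([100.0, 102.0, 101.9, 95.0])` gives `["Rise", "Stable", "Fall"]`: +2% is a rise, a 0.1% dip stays inside the band, and -6.8% is a fall.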
N. Esfandian; F. Jahani bahnamiri; S. Mavaddati
Abstract
This paper proposes a novel method for voice activity detection based on clustering in the spectro-temporal domain. In the proposed algorithm, an auditory model is used to extract spectro-temporal features, and Gaussian Mixture Model and WK-means clustering methods are used to reduce the dimensions of the spectro-temporal space. The energy and positions of the clusters are then used for voice activity detection: silence/speech is recognized using the attributes of the clusters and a threshold value updated in each frame. Having the highest energy, the first cluster is used as the main speech section in the computation. The efficiency of the proposed method was evaluated for silence/speech discrimination under different noisy conditions. The displacement of clusters in the spectro-temporal domain was considered as the criterion for determining the robustness of the features. According to the results, the proposed method improves the speech/non-speech segmentation rate compared to temporal and spectral features at low signal-to-noise ratios (SNRs).
F. Rismanian Yazdi; M. Hosseinzadeh; S. Jabbehdari
Abstract
Wireless body area networks (WBANs) are innovative technologies that are anticipated to greatly promote healthcare monitoring systems. A WBAN consists of biomedical sensors that can be worn on or implanted in the body. The sensors monitor vital signs, process the data, and transmit them to a central server. Biomedical sensors are limited in energy resources and need an improved design for managing energy consumption. Therefore, DTEC-MAC (Diverse Traffic with Energy Consumption-MAC) is proposed; it is based on the priority of data classification in the cluster nodes and delivers medical data with energy management. The proposed method uses fuzzy logic based on the distance to the sink, the remaining energy, and the length of data to select the cluster head. MATLAB software was used to simulate the method, and it was compared with the similar methods iM-SIMPLE, M-ATTEMPT, and ERP. The simulation results indicate that it performs better in extending the lifetime, guaranteeing minimum energy consumption and packet delivery rates, and maximizing the throughput.
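Cluster-head selection from distance, residual energy, and data length can be sketched as a scoring function over normalized inputs. The weights and the crisp (defuzzified) form below are illustrative assumptions, not the paper's actual fuzzy rule base:

```python
def cluster_head_score(residual_energy, dist_to_sink, data_len,
                       w_e=0.5, w_d=0.3, w_l=0.2):
    """Score a candidate node; all inputs normalized to [0, 1], higher is better.
    More energy, a shorter distance to the sink, and less queued data all help.
    The weights are illustrative, not taken from DTEC-MAC."""
    return w_e * residual_energy + w_d * (1 - dist_to_sink) + w_l * (1 - data_len)

def pick_cluster_head(nodes):
    """nodes: dict name -> (residual_energy, dist_to_sink, data_len)."""
    return max(nodes, key=lambda n: cluster_head_score(*nodes[n]))
```

A node with high remaining energy sitting close to the sink outranks a depleted, distant one, which matches the selection criteria the abstract describes.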
H.3. Artificial Intelligence
Hassan Haji Mohammadi; Alireza Talebpour; Ahamd Mahmoudi Aznaveh; Samaneh Yazdani
Abstract
Coreference resolution is one of the essential tasks of natural language processing. This task identifies all in-text expressions that refer to the same entity in the real world. Coreference resolution is used in other fields of natural language processing, such as information extraction, machine translation, and question answering. This article presents a new coreference resolution corpus in Persian named the Mehr corpus. The article's primary goal is to develop a Persian coreference corpus that resolves some of the previous Persian corpus's shortcomings while maintaining a high inter-annotator agreement. This corpus annotates coreference relations for noun phrases, named entities, pronouns, and nested named entities. Two baseline pronoun resolution systems are developed, and their results are reported. The corpus includes 400 documents and about 170k tokens. Corpus annotation was done with the WebAnno tool.
H.3. Artificial Intelligence
Fariba Taghinezhad; Mohammad Ghasemzadeh
Abstract
Artificial neural networks are among the most significant models in machine learning that use numeric inputs. This study presents a new single-layer perceptron model based on categorical inputs. In the proposed model, every quality value in the training dataset receives a trainable weight. Input data is classified by determining the weight vector that corresponds to the categorical values in it. To evaluate the performance of the proposed algorithm, we have used 10 datasets. We have compared the performance of the proposed method to that of other machine learning models, including neural networks, support vector machines, naïve Bayes classifiers, and random forests. According to the results, the proposed model resulted in a 36% reduction in memory usage when compared to baseline models across all datasets. Moreover, it demonstrated a training speed enhancement of 54.5% for datasets that contained more than 1000 samples. The accuracy of the proposed model is also comparable to other machine learning models.
Z. Imanimehr
Abstract
Peer-to-peer video streaming has received great attention in recent years. Video streaming in peer-to-peer networks is a good way to stream video on the Internet due to its high scalability, high video quality, and low bandwidth requirements. In this paper, the issue of live video streaming in peer-to-peer networks that contain selfish peers is addressed. To encourage peers to cooperate in video distribution, tokens are used as an internal currency: tokens are earned by peers when they accept requests from other peers to upload video chunks to them, and spent when sending requests to other peers to download video chunks from them. To handle the heterogeneity in the bandwidth of peers, the video is assumed to be coded in multiple layers. The same token is used for each layer but priced differently per layer; based on their available token pools, peers can request various qualities. A new token-based incentive mechanism is proposed that adapts the admission control policy of peers according to the dynamics of the request submission, request arrival, time to send requests, and bandwidth availability processes. Since peer-to-peer requests can arrive at any time, a continuous-time Markov Decision Process is used.
M. Gordan; Saeed R. Sabbagh-Yazdi; Z. Ismail; Kh. Ghaedi; H. Hamad Ghayeb
Abstract
A structural health monitoring system contains two components: a data collection approach comprising a network of sensors for recording the structural responses, and an extraction methodology for obtaining beneficial information on the structural health condition. In this regard, data mining, one of the emerging computer-based technologies, can be employed to extract valuable information from the obtained sensor databases. On the other hand, the data inverse analysis scheme, as a problem-based procedure, has been developing rapidly. Therefore, this scheme and data mining should be combined to satisfy the increasing demand for data analysis, especially in complex systems such as bridges. Consequently, this study develops a damage detection methodology based on these strategies: an inverse analysis approach using data mining is applied to a composite bridge. For this purpose, the support vector machine (SVM) algorithm is utilized to generate the patterns by means of a vibration characteristics dataset. To compare the robustness and accuracy of the predicted outputs, four kernel functions, including linear, polynomial, sigmoid, and radial basis function (RBF), are applied to build the patterns. The results point out the feasibility of the proposed method for detecting damage in composite slab-on-girder bridges.
Document and Text Processing
Mina Tabatabaei; Hossein Rahmani; Motahareh Nasiri
Abstract
The search for effective treatments for complex diseases, while minimizing toxicity and side effects, has become crucial. However, identifying synergistic combinations of drugs is often a time-consuming and expensive process, relying on trial and error due to the vast search space involved. Addressing this issue, we present a deep learning framework in this study. Our framework utilizes a diverse set of features, including chemical structure, biomedical literature embedding, and biological network interaction data, to predict potential synergistic combinations. Additionally, we employ autoencoders and principal component analysis (PCA) for dimension reduction in sparse data. Through 10-fold cross-validation, we achieved an impressive 98 percent area under the curve (AUC), surpassing the performance of seven previous state-of-the-art approaches by an average of 8%.
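AUC, the headline metric here, can be computed directly as the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal stdlib sketch (an illustration of the metric, not the study's evaluation code):

```python
def auc(scores, labels):
    """Rank-based AUC: P(positive outranks negative), ties count half.
    labels are 1 (positive) and 0 (negative)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Perfectly separated scores give an AUC of 1.0; the reported 0.98 means almost every synergistic pair is ranked above almost every non-synergistic one.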
N. Shayanfar; V. Derhami; M. Rezaeian
Abstract
In video prediction, the next frame of a video is predicted from a sequence of input frames. Although numerous studies tackle frame prediction, suitable performance has still not been achieved, and the application therefore remains an open problem. In this article, multiscale processing is studied for video prediction and a new network architecture for multiscale processing is presented. This architecture is in the broad family of autoencoders, comprising an encoder and a decoder. A pretrained VGG is used as the encoder, processing a pyramid of input frames at multiple scales simultaneously, and the decoder is based on 3D convolutional neurons. The presented architecture is studied using three datasets of varying difficulty. In addition, the proposed approach is compared to two conventional autoencoders. It is observed that using the pretrained network and multiscale processing results in a performant approach.
H.3. Artificial Intelligence
Seyed Alireza Bashiri Mosavi; Omid Khalaf Beigi; Arash Mahjoubifard
Abstract
Using intelligent approaches based on machine learning algorithms (MLAs) to diagnose COVID-19, as a joint effort, has attracted the attention of pattern recognition and medicine experts. Before MLAs were applied to data extracted from infectious diseases, techniques such as RAT and RT-qPCR were used to diagnose the contagious disease; their weaknesses include the shortage of test kits, the need for the specialist and the patient to be in the same place, and low accuracy. This study introduces a three-stage learning framework comprising a feature extractor based on the visual geometry group 16 (VGG16) model, to solve the problems caused by the lack of samples, a three-channel convolution layer, and a classifier based on a three-layer neural network. The results show that the Covid VGG16 (CoVGG16) achieves accuracies of 96.37% and 100%, precisions of 96.52% and 100%, and recalls of 96.30% and 100% for COVID-19 prediction on the test sets of the two datasets (one of CT-scan-based images and one of X-ray images, gathered from Kaggle repositories).
F. Kaveh-Yazdy; S. Zarifzadeh
Abstract
Due to their structure and usage conditions, water meters face degradation, breakage, freezing, and leakage problems. Various studies have aimed to determine the appropriate time to replace degraded ones. Earlier studies have used several features, such as user meteorological parameters, usage conditions, water network pressure, and the structure of the meters to detect failed water meters. This article proposes a recommendation framework that uses registered water consumption values as input data and provides meter replacement recommendations. The framework takes time series of registered consumption values and preprocesses them in two rounds to extract effective features. Then, multiple un-/semi-supervised outlier detection methods are applied to the processed data and assign outlier/normal labels to them. At the final stage, a hypergraph-based ensemble method receives the labels and combines them to discover the suitable label. Due to the unavailability of ground-truth labels for meter replacement, we evaluate our method with respect to its FPR and two internal metrics: the Dunn index and the Davies-Bouldin index. The results of our comparative experiments show that the proposed framework detects more compact clusters with smaller variance.
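The hypergraph-based ensemble is specific to the paper, but the core step of fusing per-detector labels into one decision can be illustrated with a simple vote (a stand-in for, not a reproduction of, the authors' method):

```python
def combine_labels(label_sets, threshold=0.5):
    """label_sets: list of {meter_id: 'outlier' | 'normal'} dicts, one per detector.
    A meter is flagged when at least `threshold` of the detectors call it an outlier."""
    meters = set().union(*label_sets)
    combined = {}
    for meter in meters:
        votes = sum(1 for labels in label_sets if labels.get(meter) == "outlier")
        combined[meter] = "outlier" if votes / len(label_sets) >= threshold else "normal"
    return combined
```

With three detectors, a meter flagged by two of them ends up "outlier", while one flagged by a single detector stays "normal".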
H.5. Image Processing and Computer Vision
Fatemeh Zare mehrjardi; Alimohammad Latif; Mohsen Sardari Zarchi
Abstract
Images are a powerful communication tool widely used in various applications, such as forensic medicine and court proceedings, where the validity of the image is crucial. However, with the development and availability of image editing tools, image manipulation can easily be performed for a specific purpose. Copy-move forgery is one of the simplest and most common methods of image manipulation. There are two traditional methods to detect this type of forgery: block-based and keypoint-based. In this study, we present a hybrid block-based and keypoint-based approach that uses meta-heuristic algorithms to find the optimal configuration. For this purpose, we first search for pairs of blocks suspected of forgery using a genetic algorithm, with the maximum number of matched keypoints as the fitness function. Then, we find the exact forgery blocks using a simulated annealing algorithm that produces neighboring solutions around the suspicious blocks. We evaluate the proposed method on the CoMoFoD and COVERAGE datasets and obtain accuracy, precision, recall, and IoU values of 96.87, 92.15, 95.34, and 93.45, respectively. The evaluation results show the satisfactory performance of the proposed method.
V. Ghasemi; A. Ghanbari Sorkhi
Abstract
Deploying m-connected k-covering (MK) wireless sensor networks (WSNs) is crucial for reliable packet delivery and target coverage. This paper proposes implementing random MK WSNs based on expected m-connected k-covering (EMK) WSNs. We define EMK WSNs as random WSNs mathematically expected to be both m-connected and k-covering. Deploying random EMK WSNs is conducted by deriving a relationship between m-connectivity and k-coverage, together with a lower bound for the required number of nodes. It is shown that EMK WSNs tend to be MK asymptotically. A polynomial worst-case and linear average-case complexity algorithm is presented to turn an EMK WSN into MK in non-asymptotic conditions. The m-connectivity is founded on the concept of support sets to strictly guarantee the existence of m disjoint paths between every node and the sink. The theoretical results are assessed via experiments, and several metaheuristic solutions have been benchmarked to reveal the appropriate size of the generated MK WSNs.
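The "expected k-covering" idea can be illustrated with a binomial calculation: if each of n randomly placed sensors covers a given point independently with probability p, the chance that the point is covered by at least k of them follows directly (an illustrative model, not the paper's exact derivation):

```python
from math import comb

def prob_k_covered(n, p, k):
    """P(a point is covered by at least k of n sensors), where p is the
    probability that a single randomly placed sensor covers the point."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
```

Inverting this relation for a target probability is what yields a lower bound on the number of nodes needed for an expected k-covering deployment.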
I.4. Life and Medical Sciences
Nasrin Aghaee-Maybodi; Amin Nezarat; Sima Emadi; Mohammad Reza Ghaffari
Abstract
Sequence alignment and genome mapping pose significant challenges, primarily focusing on speed and storage space requirements for mapped sequences. With the ever-increasing volume of DNA sequence data, it becomes imperative to develop efficient alignment methods that not only reduce storage demands but also offer rapid alignment. This study introduces the Parallel Sequence Alignment with a Hash-Based Model (PSALR) algorithm, specifically designed to enhance alignment speed and optimize storage space while maintaining utmost accuracy. In contrast to other algorithms like BLAST, PSALR efficiently indexes data using a hash table, resulting in reduced computational load and processing time. This algorithm utilizes data compression and packetization with conventional bandwidth sizes, distributing data among different nodes to reduce memory and transfer time. Upon receiving compressed data, nodes can seamlessly perform searching and mapping, eliminating the need for unpacking and decoding at the destination. As an additional innovation, PSALR not only divides sequences among processors but also breaks down large sequences into sub-sequences, forwarding them to nodes. This approach eliminates any restrictions on query length sent to nodes, and evaluation results are returned directly to the user without central node involvement. Another notable feature of PSALR is its utilization of overlapping sub-sequences within both query and reference sequences. This ensures that the search and mapping process includes all possible sub-sequences of the target sequence, rather than being limited to a subset. Performance tests indicate that the PSALR algorithm outperforms its counterparts, positioning it as a promising solution for efficient sequence alignment and genome mapping.
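The core indexing idea, hashing every overlapping sub-sequence so that no candidate match is missed, can be sketched as a plain k-mer hash table. The value of k, the data structures, and the seed-hit representation below are illustrative assumptions; PSALR's actual hashing, compression, packetization, and node distribution are not specified in the abstract.

```python
def build_index(reference, k=4):
    """Hash-table index of all overlapping k-mers in the reference sequence,
    in the spirit of PSALR's hash-based model (illustrative sketch)."""
    index = {}
    for i in range(len(reference) - k + 1):
        index.setdefault(reference[i:i + k], []).append(i)
    return index

def map_query(query, index, k=4):
    """Look up every overlapping k-mer of the query in the index and return
    candidate (query_offset, reference_offset) seed hits for alignment."""
    hits = []
    for i in range(len(query) - k + 1):
        for pos in index.get(query[i:i + k], []):
            hits.append((i, pos))
    return hits
```

Because both the reference and the query are scanned with overlapping windows, every sub-sequence of the target appears in the search, which is the property the abstract highlights; a full lookup is a dictionary access rather than a scan, which is where the reduced computational load comes from.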
S. Bayatpour; Seyed M. H. Hasheminejad
Abstract
Most of the methods proposed for segmenting image objects are supervised, which makes them costly due to their need for large amounts of labeled data. In this article, however, we present a method for segmenting objects based on a meta-heuristic optimization that does not need any training data. The procedure consists of two main stages: edge detection and texture analysis. In the edge detection stage, we utilize invasive weed optimization (IWO) and local thresholding. Edge detection methods based on local histograms are efficient, but it is very difficult to determine the desired parameters manually; in addition, these parameters must be selected specifically for each image. In this paper, a method is presented for the automatic determination of these parameters using an evolutionary algorithm. Evaluation of this method demonstrates its high performance on natural images.
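The local thresholding step being tuned can be sketched as follows. The window size `win` and contrast offset `c` are exactly the kind of per-image parameters the abstract says must otherwise be chosen by hand; here they are fixed hypothetical values, whereas in the paper IWO would search over them.

```python
def local_threshold_edges(img, win=3, c=10):
    """Mark a pixel as an edge if it deviates from its local window mean by
    more than c. img is a 2-D list of grey levels; win and c are the
    parameters an evolutionary algorithm would tune per image."""
    h, w = len(img), len(img[0])
    r = win // 2
    edges = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[ii][jj]
                    for ii in range(max(0, i - r), min(h, i + r + 1))
                    for jj in range(max(0, j - r), min(w, j + r + 1))]
            mean = sum(vals) / len(vals)
            edges[i][j] = 1 if abs(img[i][j] - mean) > c else 0
    return edges
```

On a step-shaped image the edge map fires only near the intensity discontinuity, but how cleanly it does so depends entirely on `win` and `c`, which motivates searching for them automatically.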
F.2.7. Optimization
Mahsa Dehbozorgi; Pirooz Shamsinejadbabaki; Elmira Ashoormahani
Abstract
Clustering is one of the most effective techniques for reducing energy consumption in wireless sensor networks, but selecting optimal cluster heads (CHs) as relay nodes remains a very challenging task. Current state-of-the-art methods in this area focus only on the individual characteristics of nodes, such as energy level and distance to the Base Station (BS). However, when a CH dies, another CH must be found for the cluster, and usually one of its neighbors is selected. Unlike existing methods, the method proposed in this paper considers node neighborhood fitness as a selection factor in addition to the other typical factors. A Particle Swarm Optimization algorithm is designed to find the best CHs based on intra-cluster distance, distance of the CHs to the BS, residual energy, and neighborhood fitness. The proposed method was compared with the LEACH and PSO-ECHS algorithms, and experimental results show that it postpones the death of the first node by 5.79%, the death of 30% of the nodes by 25.50%, and the death of 70% of the nodes by 58.67% compared to PSO-ECHS.
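The four selection factors named above can be combined into a single PSO cost function along these lines. The weights, the inversion of the energy terms, and the scalar "neighborhood energy" stand-in are all illustrative assumptions; the paper's actual fitness formulation is not given in the abstract.

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) node positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def ch_fitness(ch, members, bs, energy, neighbor_energy, w=(0.3, 0.2, 0.3, 0.2)):
    """Weighted cost of a candidate cluster head (lower is better), combining
    intra-cluster distance, distance to the BS, residual energy, and a
    neighborhood-fitness term. Weights w are hypothetical."""
    intra = sum(dist(ch, m) for m in members) / max(len(members), 1)
    to_bs = dist(ch, bs)
    # Residual energy and neighborhood fitness are benefits, so invert them
    # to fold everything into one minimization objective.
    return (w[0] * intra + w[1] * to_bs
            + w[2] / max(energy, 1e-9) + w[3] / max(neighbor_energy, 1e-9))
```

A PSO particle would encode one candidate CH set and be scored by summing this cost over its clusters; the neighborhood term is what lets the swarm prefer heads whose likely successors are also healthy, which is the paper's stated contribution.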
H.3.11. Vision and Scene Understanding
S. Bayatpour; M. Sharghi
Abstract
Digital images are being produced in massive numbers every day. A component that may exist in digital images is text. Textual information can be extracted and used in a variety of fields. Noise, blur, distortions, occlusion, font variation, alignment, and orientation are among the main challenges for text detection in natural images. Despite many advances in text detection algorithms, there is not yet a single algorithm that addresses all of the above problems successfully. Furthermore, most of the proposed algorithms can only detect horizontal texts, and a very small fraction of them consider the Farsi language. In this paper, a method is proposed for detecting multi-oriented texts in both the Farsi and English languages. We have defined seven geometric features to distinguish text components from the background and proposed a new contrast enhancement method for text detection algorithms. Our experimental results indicate that the proposed method achieves high performance in text detection on natural images.
N. Majidi; K. Kiani; R. Rastgoo
Abstract
This study presents a method to reconstruct a high-resolution image using a deep convolutional neural network. We propose a deep model, entitled Deep Block Super Resolution (DBSR), that fuses the output features of a deep convolutional network and a shallow convolutional network. In this way, our model benefits simultaneously from the high-frequency and low-frequency features extracted by the deep and shallow networks. We use residual layers to build repetitive layers, increase the depth of the model, and obtain an end-to-end model. Furthermore, we employ a deep network in the up-sampling step instead of the bicubic interpolation used in most previous works. Since image resolution plays an important role in obtaining rich information from medical images and supports faster and more accurate diagnosis, we use medical images for resolution enhancement. Our model is capable of reconstructing a high-resolution image from a low-resolution one for both medical and general images. Evaluation results on the TSA and TZDE datasets, containing MRI images, and on the Set5, Set14, B100, and Urban100 datasets, containing general images, demonstrate that our model outperforms state-of-the-art alternatives in both medical and general super-resolution enhancement from a single input image.
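Why fusing a deep and a shallow branch preserves both frequency bands can be shown with a one-dimensional toy decomposition. The moving-average "shallow branch" and residual "deep branch" below are stand-ins for the trained CNN branches, chosen only to make the frequency-splitting intuition concrete; they are not the DBSR architecture.

```python
def shallow_branch(x):
    """Stand-in for the shallow branch: a 3-tap moving average that keeps
    the low-frequency content of signal x (a list of floats)."""
    n = len(x)
    return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3 for i in range(n)]

def deep_branch(x):
    """Stand-in for the deep branch: the high-frequency residual detail,
    i.e. what the smoothing removed."""
    low = shallow_branch(x)
    return [xi - li for xi, li in zip(x, low)]

def fuse(x):
    """DBSR-style fusion: summing the low- and high-frequency branch outputs
    recovers both bands of the original signal."""
    low = shallow_branch(x)
    high = deep_branch(x)
    return [l + h for l, h in zip(low, high)]
```

In this toy setting the fused output reconstructs the input exactly, which illustrates the claim that combining the two branches lets the model exploit high- and low-frequency features at the same time; in DBSR itself both branches are learned end-to-end.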