D. Data
Zahra Ghorbani; Ali Ghorbanian
Abstract
Increasing the accuracy of time-series clustering while reducing execution time is a primary challenge in the field of time-series clustering. Researchers have recently applied approaches such as the development of distance metrics and dimensionality reduction to address this challenge. However, using segmentation and ensemble clustering to solve this issue is a key aspect that has received less attention in previous research. In this study, an algorithm based on selecting and combining the best segments created from a time-series dataset was developed. In the first step, the dataset is divided into segments of equal length. In the second step, each segment is clustered using a hierarchical clustering algorithm. In the third step, a genetic algorithm selects different segments and combines them using ensemble clustering, and the resulting clustering of the selected segments is taken as the final clustering of the dataset. At this stage, an internal clustering criterion evaluates and ranks the produced solutions. The proposed algorithm was executed on 82 different datasets with 10 repetitions. The results indicated an increase in clustering efficiency of 3.07%, reaching a value of 67.40. The results were evaluated with respect to the length of the time series and the type of dataset, and were also assessed using statistical tests against six algorithms from the literature.
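The segment-then-combine pipeline described above can be sketched in Python. The hierarchical clustering step is omitted; the sketch assumes each segment has already been clustered, and the co-association combination and the 0.5 threshold are illustrative assumptions rather than the paper's exact ensemble procedure.

```python
from itertools import combinations

def segment(dataset, n_segments):
    """Split every series in the dataset into n_segments equal-length pieces."""
    length = len(dataset[0]) // n_segments
    return [[series[i * length:(i + 1) * length] for series in dataset]
            for i in range(n_segments)]

def coassociation(labelings, n):
    """Fraction of segment labelings in which each pair of items co-clusters."""
    m = [[0.0] * n for _ in range(n)]
    for labels in labelings:
        for i, j in combinations(range(n), 2):
            if labels[i] == labels[j]:
                m[i][j] += 1.0 / len(labelings)
                m[j][i] = m[i][j]
    return m

def ensemble_labels(labelings, n, threshold=0.5):
    """Final labels = connected components of the thresholded co-association graph."""
    m = coassociation(labelings, n)
    labels, current = [-1] * n, 0
    for start in range(n):
        if labels[start] != -1:
            continue
        stack, labels[start] = [start], current
        while stack:
            i = stack.pop()
            for j in range(n):
                if labels[j] == -1 and m[i][j] >= threshold:
                    labels[j] = current
                    stack.append(j)
        current += 1
    return labels
```

A genetic algorithm would then search over subsets of segment labelings, scoring each combined result with an internal clustering criterion.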
H.3.2.2. Computer vision
Masoumeh Esmaeiili; Kourosh Kiani
Abstract
The classification of emotions using electroencephalography (EEG) signals is inherently challenging due to the intricate nature of brain activity. Overcoming inconsistencies in EEG signals and establishing a universally applicable sentiment analysis model are essential objectives. This study introduces an innovative approach to cross-subject emotion recognition, employing a genetic algorithm (GA) to eliminate non-informative frames. The optimal frames identified by the GA then undergo spatial feature extraction using common spatial patterns (CSP) and the logarithm of variance. Subsequently, these features are input into a Transformer network to capture spatial-temporal features, and emotion classification is executed by a fully connected (FC) layer with a Softmax activation function. The innovations of this paper thus include using a limited number of channels for emotion classification without sacrificing accuracy, selecting optimal signal segments with the GA, and employing the Transformer network for high-accuracy, high-speed classification. The proposed method is evaluated on two publicly accessible datasets, SEED and SEED-V, in two distinct scenarios. Notably, it attains mean accuracy rates of 99.96% and 99.51% in the cross-subject scenario, and 99.93% and 99.43% in the multi-subject scenario, for the SEED and SEED-V datasets, respectively. The proposed method outperforms the state-of-the-art (SOTA) in both scenarios on both datasets, underscoring its superior efficacy. Additionally, comparing the accuracy of individual subjects with previous work in the cross-subject scenario further confirms the superiority of the proposed method on both datasets.
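The log-variance feature step after CSP projection is simple to state: each spatially filtered channel is summarized by the logarithm of its variance. A sketch of that step alone, assuming the CSP filters have already been applied:

```python
from math import log
from statistics import pvariance

def log_variance_features(projected_channels):
    """One feature per CSP-projected channel: log of the signal's variance."""
    return [log(pvariance(channel)) for channel in projected_channels]
```

These per-channel features would then form the token sequence fed to the Transformer classifier.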
H.5. Image Processing and Computer Vision
Z. Mehrnahad; A.M. Latif; J. Zarepour Ahmadabadi
Abstract
In this paper, a novel scheme for lossless meaningful visual secret sharing using XOR properties is presented. In the first step, a genetic algorithm with an appropriately designed objective function creates noisy share images. These images do not contain any information about the input secret image, and the secret image is fully recovered by stacking them together. Because image transmission is subject to attacks, a new approach for constructing meaningful shares based on the properties of XOR is proposed. In the recovery scheme, the input secret image is fully recovered by an efficient XOR operation. The proposed method is evaluated using the PSNR, MSE, and BCR criteria. The experimental results show good outcomes compared with other methods, in both the quality of the share images and the recovered image.
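The lossless XOR recovery property underlying such schemes can be illustrated with a minimal (n, n) XOR sharing sketch on flat pixel arrays; the GA-driven construction of meaningful shares is beyond this snippet, and the byte-valued pixels are an assumption.

```python
import random

def make_shares(secret, n):
    """(n, n) XOR sharing: n - 1 random shares, the last one closes the XOR."""
    shares = [[random.randrange(256) for _ in secret] for _ in range(n - 1)]
    last = list(secret)
    for share in shares:
        last = [a ^ b for a, b in zip(last, share)]
    return shares + [last]

def recover(shares):
    """Stacking (XOR-ing) all shares reproduces the secret exactly, losslessly."""
    out = [0] * len(shares[0])
    for share in shares:
        out = [a ^ b for a, b in zip(out, share)]
    return out
```

Any n - 1 shares reveal nothing, since each looks uniformly random on its own.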
A.H. Damia; M. Esnaashari; M.R. Parvizimosaed
Abstract
In structural software testing, test data generation is essential. Generating test data is a search problem that can be solved with search algorithms, and the genetic algorithm is one of the most widely used algorithms in this field. Adjusting the genetic algorithm's parameters helps to increase its effectiveness. In this paper, an Adaptive Genetic Algorithm (AGA) is used to maintain population diversity for test data generation based on the path coverage criterion; it adapts the recombination and mutation rates using the similarity between chromosomes and their fitness during each run of the algorithm. Experiments have shown that this method generates test data faster than other versions of the genetic algorithm used in prior work.
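A hedged sketch of the adaptation idea: mutation rises and crossover falls as the population becomes more similar, which preserves diversity. The specific rate bounds and the linear schedule are illustrative assumptions, not the paper's formulas.

```python
def hamming_similarity(a, b):
    """Fraction of matching genes between two equal-length chromosomes."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def adaptive_rates(population, cx_hi=0.9, cx_lo=0.5, mut_lo=0.01, mut_hi=0.3):
    """Map mean pairwise similarity to (crossover rate, mutation rate).

    Illustrative linear schedule: an identical population pushes mutation
    up to mut_hi and crossover down to cx_lo; a fully diverse population
    does the opposite.
    """
    pairs = [(a, b) for i, a in enumerate(population) for b in population[i + 1:]]
    sim = sum(hamming_similarity(a, b) for a, b in pairs) / len(pairs)
    crossover = cx_hi - sim * (cx_hi - cx_lo)
    mutation = mut_lo + sim * (mut_hi - mut_lo)
    return crossover, mutation
```

In an AGA these rates would be recomputed each generation before applying the operators.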
E. Feli; R. Hosseini; S. Yazdani
Abstract
In Vitro Fertilization (IVF) is one of the scientifically established methods of infertility treatment. This study aimed at improving the performance of predicting IVF success using machine learning optimized through evolutionary algorithms. A Multilayer Perceptron (MLP) neural network was proposed to classify the infertility dataset, and a genetic algorithm was used to improve the performance of the MLP model. The proposed model was applied to a dataset of 594 eggs from 94 patients undergoing IVF, of which 318 were good-quality embryos and 276 were lower-quality embryos. For performance evaluation of the MLP model, an ROC curve analysis was conducted and 10-fold cross-validation was performed. The results revealed that this intelligent model is highly efficient, with an accuracy of 96% for the MLP neural network, which is promising compared with counterpart methods.
N. Alibabaie; A.M. Latif
Abstract
Periodic noise reduction is a fundamental problem in image processing, as such noise severely affects the visual quality and subsequent use of the data. Most conventional approaches are dedicated to either the frequency or the spatial domain alone. In this research, we propose a dual-domain approach that converts the periodic noise reduction task into an image decomposition problem. We introduce a bio-inspired computational model to separate the original image from the noise pattern without any a priori knowledge about its structure or statistics. Experiments on both synthetic and non-synthetic noisy images were carried out to validate the effectiveness and efficiency of the proposed algorithm. The simulation results demonstrate the effectiveness of the proposed method both qualitatively and quantitatively.
M. Danesh; S. Danesh
Abstract
This paper presents a new method for regression model prediction in an uncertain environment. In practical engineering problems, in order to develop a regression or ANN model for making predictions, the average of a set of repeated observed values is introduced to the model as an input variable. Therefore, the estimated response of the process is also the average of a set of output values, where the variation around the mean is not determined. However, to provide unbiased and precise estimations, the predictions are required to be correct on average and the spread of the data must be specified. To address this issue, we propose a method based on a fuzzy inference system combined with genetic and linear programming algorithms. We consider crisp inputs and a symmetrical triangular fuzzy output, and the proposed algorithm is applied to fit the fuzzy regression model. In addition, we use a simulation example and a practical example from the field of machining to assess the performance of the proposed method on practical problems in which the output variables are uncertain and imprecise. Finally, we compare the performance of the suggested method with other methods. Based on the examples, the proposed method is verified for prediction. The results show that the proposed method reduces the error values to a minimum level and is more accurate than the Linear Programming (LP) and Fuzzy Weights with Linear Programming (FWLP) methods.
H. Rahmani; H. Kamali; H. Shah-Hosseini
Abstract
Nowadays, many studies are devoted to discovering important nodes in graph data, and social networks as graph data have attracted a lot of attention. There are various purposes for discovering the important nodes in social networks, such as finding the leaders in them, i.e. the users who play an important role in promoting advertising, and so on. Different criteria have been proposed for discovering important nodes in graph data, but measuring a node's importance by a single criterion may be inefficient due to the variety of graph structures, so combinations of criteria have recently been used. In this paper, we propose a system for the Discovery of Important Nodes in social networks using Genetic Algorithms (DINGA). In our proposed system, important nodes in social networks are discovered by employing a combination of eight informative criteria whose weights are set intelligently. We compare our results with a manually weighted method, which assigns random weights to each criterion, on four real networks. Our method shows an average improvement of 22% in the accuracy of important node discovery.
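The weighted combination of criteria can be sketched as a plain weighted sum over normalized criterion scores. In DINGA the weights would be evolved by the GA; here they are fixed illustrative values, and the criterion names are assumptions.

```python
def node_importance(criteria_scores, weights):
    """Weighted sum of per-criterion scores for one node.

    criteria_scores and weights are dicts keyed by criterion name;
    weights are assumed to be normalized to sum to 1.
    """
    return sum(weights[name] * score for name, score in criteria_scores.items())

def rank_nodes(scores_by_node, weights):
    """Sort node ids by combined importance, most important first."""
    return sorted(scores_by_node,
                  key=lambda n: node_importance(scores_by_node[n], weights),
                  reverse=True)
```

A GA fitness function would then score a weight vector by how well the resulting ranking matches known important nodes.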
I.3.7. Engineering
F. Nosratian; H. Nematzadeh; H. Motameni
Abstract
The World Wide Web is growing at a very fast pace and makes a large amount of information available to the public. Search engines use conventional methods to retrieve information on the Web; however, their search results can still be refined, and their accuracy is not high enough. One approach to web mining is evolutionary algorithms, which search according to the user's interests. The proposed method, based on a genetic algorithm, optimizes important relationships among links on web pages and also presents a way of classifying web documents. It likewise finds the best pages among those returned by the engines, and calculates the quality of pages from web page features, either independently or dependently. The proposed algorithm is complementary to search engines. After implementation of the genetic algorithm in MATLAB 2013 with a crossover rate of 0.7 and a mutation rate of 0.05, the best and most similar pages are presented to the user. The optimal solutions remained stable over several runs of the proposed algorithm.
F.2.7. Optimization
F. Fouladi Mahani; A. Mahanipour; A. Mokhtari
Abstract
Recently, aluminum nanostructures have attracted significant interest as plasmonic color filters and promising alternatives to commercial color filters based on dye films or pigments. These color filters offer potential applications in LCDs, LEDs, color printing, CMOS image sensors, and multispectral imaging. However, engineering the optical characteristics of these nanostructures to design a color filter with the desired pass-band spectrum and high color purity requires accurate optimization techniques. In this paper, an optimization procedure integrating a genetic algorithm with FDTD Solutions is utilized to design plasmonic color filters automatically. Our proposed aluminum nanohole arrays were successfully realized as additive (red, green, and blue) color filters using the automated optimization procedure. Despite all the considerations for fabrication simplicity, the designed filters feature transmission efficiencies of 45-50 percent with a FWHM of 40 nm, 50 nm, and 80 nm for the red, green, and blue filters, respectively. The obtained results prove an efficient integration of the genetic algorithm and FDTD Solutions, revealing the potential of the proposed method for the automated design of similar nanostructures.
H.3.15.3. Evolutionary computing and genetic algorithms
A.M Esmilizaini; A.M Latif; Gh. Barid Loghmani
Abstract
Image zooming is one of the current issues in image processing, where maintaining the quality and structure of the zoomed image is important. To zoom an image, extra pixels must be placed among the data of the image, and the added data must be consistent with the texture of the image without creating artificial blocks. In this study, the required pixels are estimated using radial basis functions, with the shape parameter c calculated by a genetic algorithm. Then, all the estimated pixels are revised based on an edge-correction sub-algorithm. The proposed method is a non-linear method that preserves the edges and minimizes the blur and block artifacts of the zoomed image. The proposed method is evaluated on several images to calculate the optimum shape parameter of the radial basis functions. Numerical results obtained with the PSNR and SSIM fidelity measures on different images are presented and compared with some other methods. The average PSNR between the original image and the zoomed image is 33.16, which shows that zooming by a factor of 2 produces an image similar to the original and emphasizes the efficient performance of the proposed method.
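The PSNR fidelity measure used in the evaluation can be computed directly; this sketch works on flat 8-bit pixel lists and assumes a peak value of 255.

```python
from math import log10

def psnr(original, zoomed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(original, zoomed)) / len(original)
    return float("inf") if mse == 0 else 10 * log10(peak * peak / mse)
```

Higher values mean the zoomed image is closer to the reference; identical images give infinite PSNR.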
B.3. Communication/Networking and Information Technology
Seyed M. Hosseinirad
Abstract
Due to resource constraints and dynamic parameters, reducing energy consumption has become the most important issue in wireless sensor network (WSN) topology design. Previously proposed hierarchical methods cluster a WSN into different cluster layers in a single step of evolutionary algorithm usage with complicated parameters, which may reduce efficiency and performance. In fact, in WSN topology, adding a cluster layer is a trade-off between time complexity and energy efficiency. In this study, considering the most important WSN design parameters, a novel dynamic multilayer hierarchical clustering approach using evolutionary algorithms for densely deployed WSNs is proposed. Different evolutionary algorithms, such as the Genetic Algorithm (GA), the Imperialist Competitive Algorithm (ICA), and Particle Swarm Optimization (PSO), are used to find an efficient evolutionary algorithm for implementing the proposed clustering method. The obtained results demonstrate that PSO is more efficient than the other algorithms at providing maximum network coverage, efficient cluster formation, and network traffic reduction. The simulation results of the multilayer WSN clustering design with the PSO algorithm show that this novel approach reduces communication energy significantly and increases the network lifetime up to 2.29 times, while providing full network coverage (100%) for up to 350 rounds (56% of the network lifetime), compared with WEEC and LEACH-ICA clustering.
F.2.7. Optimization
M. Kosari; M. Teshnehlab
Abstract
Although mathematicians have studied fractional calculus for many years, its application in engineering, especially in modeling and control, has few antecedents. Since there is considerable freedom in choosing the orders of the differentiator and integrator in fractional calculus, physical systems can be modeled accurately. This paper deals with time-domain identification of fractional-order chaotic systems, where conventional derivation is replaced by a fractional one with the help of a non-integer derivative operator. This operator is itself approximated by an N-dimensional system composed of an integrator and a phase-lead filter. A hybrid particle swarm optimization (PSO) and genetic algorithm (GA) method is applied to estimate the parameters of the approximated nonlinear fractional-order chaotic system, modeled by a state-space representation. The feasibility of this approach is demonstrated by identifying the parameters of an approximated fractional-order Lorenz chaotic system. The performance of the proposed algorithm is compared with the genetic algorithm (GA) and standard particle swarm optimization (SPSO) in terms of parameter accuracy and cost function. To evaluate the identification accuracy, the time-domain output error is used as the fitness function for parameter optimization. Simulation results show that the proposed method is more successful than the other algorithms for parameter identification of fractional-order chaotic systems.
G. Information Technology and Systems
M. Aghazadeh; F. Soleimanian Gharehchopogh
Abstract
The size and complexity of websites have grown significantly during recent years. In line with this growth, the need to maintain most of the resources has intensified. Content Management Systems (CMSs) are software presented in response to the increased demands of users. With the advent of CMSs, factors such as domains, development of predesigned modules, graphics, optimization, and alternative support have come to influence the cost of software and web-based projects, and these factors have challenged previously introduced cost estimation models. This paper provides a hybrid method for estimating the cost of websites designed with content management systems. The proposed method uses a combination of a genetic algorithm and a Multilayer Perceptron (MLP). Results were evaluated by comparing the numbers of correctly and incorrectly classified instances and the Kappa coefficient, which represents the correlation between the sets. According to the obtained results, the Kappa coefficient on the test data set is 0.82 for the proposed method, 0.06 for the genetic algorithm, and 0.54 for the MLP Artificial Neural Network (ANN). Based on these results, the proposed method can be used to estimate the cost of websites designed with content management systems.
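The Kappa coefficient used for evaluation corrects raw agreement for chance agreement. A minimal sketch for lists of class labels:

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    n = len(y_true)
    p_obs = sum(t == p for t, p in zip(y_true, y_pred)) / n
    labels = set(y_true) | set(y_pred)
    p_chance = sum((y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels)
    return (p_obs - p_chance) / (1 - p_chance)
```

A kappa of 1.0 means perfect agreement; 0 means no better than chance, which is why it is a stricter criterion than raw accuracy.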
I.3.5. Earth and atmospheric sciences
A. Jalalkamali; N. Jalalkamali
Abstract
The prediction of groundwater quality is very important for the management of water resources and environmental activities. The present study integrates several methods, such as Geographic Information Systems (GIS) and Artificial Intelligence (AI) methodologies, to predict groundwater quality in the Kerman plain (including HCO3- concentrations and the Electrical Conductivity (EC) of groundwater). This research investigates the abilities of the Adaptive Neuro-Fuzzy Inference System (ANFIS), the hybrid of ANFIS with a Genetic Algorithm (GA), and Artificial Neural Network (ANN) techniques to predict groundwater quality. Various combinations of monthly variables, namely rainfall and groundwater levels in the wells, were used by two different neuro-fuzzy models (standard ANFIS and ANFIS-GA) and an ANN. The results show that the ANFIS-GA method produces a more parsimonious model with fewer rules (about a 300% reduction in the number of rules) compared with the ANFIS model, while at the same time improving the fitness criteria and thus the model efficiency (38.4% in R2 and 44% in MAPE). The study also reveals that groundwater level fluctuations and rainfall are two important factors in predicting indices of groundwater quality.
H.3.2.5. Environment
M. T. Sattari; M. Pal; R. Mirabbasi; J. Abraham
Abstract
This work reports the results of four ensemble approaches with the M5 model tree as the base regression model to predict the Sodium Adsorption Ratio (SAR). Ensemble methods that combine the output of multiple regression models have been found to be more accurate than any of the individual models making up the ensemble. In this study, additive boosting, bagging, rotation forest, and random subspace methods are used. The dataset, which consisted of 488 samples with nine input parameters, was obtained from the Barandoozchay River in West Azerbaijan province, Iran. Three evaluation criteria (correlation coefficient, root mean square error, and mean absolute error) were used to judge the accuracy of the different ensemble models. In addition to using the M5 model tree to predict the SAR values, a wrapper-based variable selection approach, with an M5 model tree as the learning algorithm and a genetic algorithm as the search method, was used to select useful input variables. The encouraging performance motivates the use of this technique to predict SAR values.
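For reference, the target quantity itself has a standard closed form: with ion concentrations in meq/L, SAR = Na+ / sqrt((Ca2+ + Mg2+) / 2).

```python
from math import sqrt

def sodium_adsorption_ratio(na, ca, mg):
    """SAR from Na+, Ca2+, and Mg2+ concentrations, all expressed in meq/L."""
    return na / sqrt((ca + mg) / 2.0)
```

The regression models above predict SAR indirectly from nine input parameters, so this formula serves only to define what is being estimated.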
A.1. General
H. Kiani Rad; Z. Moravej
Abstract
In recent years, significant research efforts have been devoted to the optimal planning of power systems. Substation Expansion Planning (SEP), as a sub-problem of power system planning, consists of finding the most economical solution with the optimal location and size of future substations and/or feeders to meet the future load demand. The large number of design variables and the combination of discrete and continuous variables make substation expansion planning a very challenging problem, and various methods have been presented to solve it. Since the Bacterial Foraging Optimization Algorithm (BFOA) yields proper results in power system studies and has not yet been applied to the SEP problem at the sub-transmission voltage level, this paper develops a new BFOA-based method to solve the Sub-Transmission Substation Expansion Planning (STSEP) problem. The technique discussed in this paper uses the BFOA to simultaneously optimize the sizes and locations of both the existing and newly installed substations and feeders while considering reliability constraints. To clarify the capabilities of the presented method, two test systems (a typical network and a real one) are considered, and the results of applying the GA and BFOA to these networks are compared. The simulation results demonstrate that the BFOA has the potential to find more optimal results than the other algorithm under the same conditions. Fast convergence, consideration of real-world network limitations as problem constraints, and simplicity of application to real networks are the main features of the proposed method.
E.3. Analysis of Algorithms and Problem Complexity
M. Asghari; H. Nematzadeh
Abstract
Suspended particles have deleterious effects on human health, and Tehran's geographical location is one of the reasons it suffers from air pollution. One of the most important ways to reduce air pollution is to predict the concentration of pollutants. This paper proposes a hybrid method to predict air pollution in Tehran based on particulate matter smaller than 10 microns (PM10), using the data of the Aghdasiyeh Weather Quality Control Station and the Mehrabad Weather Station from 2007 to 2013. In total, 11 inputs were fed to the model to predict the daily concentration of PM10. For this purpose, an Artificial Neural Network with Back Propagation (BP), with one hidden layer and a sigmoid activation function, and its hybrid with a Genetic Algorithm (BP-GA) were used, and ultimately the performance of the proposed method was compared with the basic BP Artificial Neural Network based on the R2, RMSE, and MAE criteria. The findings show that BP-GA has higher accuracy and performance. In addition, the results are more accurate for shorter time periods, because large fluctuations of data over long periods negatively affect network performance; unregistered data also negatively affect predictions. The simulations were conducted in Microsoft Excel and MATLAB 2013.
B.3. Communication/Networking and Information Technology
A. Ghaffari; S. Nobahary
Abstract
Wireless sensor networks (WSNs) consist of a large number of sensor nodes that are capable of sensing different environmental phenomena and sending the collected data to the base station, or sink. Since sensor nodes are made of cheap components and are deployed in remote and uncontrolled environments, they are prone to failure; thus, maintaining a network that functions properly even when undesired events occur, which is called fault tolerance, is necessary, and fault management is essential in these networks. In this paper, a new method is proposed with particular attention to fault tolerance and fault detection in WSNs. The performance of the proposed method was simulated in MATLAB. The proposed method is based on a majority vote and can detect permanently faulty sensor nodes with high detection accuracy and a low false alarm rate, excluding them from the network. To investigate the efficiency of the new method, it was compared with the Chen, Lee, and hybrid algorithms. Simulation results indicated that the proposed method performs better in parameters such as detection accuracy (DA) and false alarm rate (FAR), even with a large set of faulty sensor nodes.
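The majority-vote idea can be sketched at the level of one node comparing its reading with its neighbours' readings; the tolerance value and the strict-majority rule are illustrative assumptions, not the paper's exact test.

```python
def is_faulty(own_reading, neighbour_readings, tolerance=5.0):
    """Flag a node as faulty when a strict majority of its neighbours report
    readings that differ from its own by more than the tolerance.

    Assumes neighbouring nodes observe spatially correlated phenomena, so a
    reading far from most neighbours suggests a permanent sensor fault.
    """
    disagreements = sum(abs(own_reading - r) > tolerance
                        for r in neighbour_readings)
    return disagreements > len(neighbour_readings) / 2
```

Nodes flagged this way would then be excluded from routing and aggregation.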
Sh. Mehrjoo; M. Jasemi; A. Mahmoudi
Abstract
In this paper, after a general literature review on the concept of the Efficient Frontier (EF), an important inadequacy of variance-based models for deriving EFs and the strong need for another risk measure are exemplified. In this regard, the first-order Lower Partial Moment (LPM) is chosen to replace variance as the risk measure. Because of the particular shape of the proposed risk measure, one part of the paper is devoted to developing a mechanism for deriving the EF on the basis of the new model. The superiority of the new model over the old one is then shown, and the shape of the new EFs under different situations is investigated. Finally, it is concluded that applying the first-order LPM in financial models in the phase of deriving the EF is completely wise and justifiable.
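The first-order Lower Partial Moment has a simple sample form: the mean shortfall below a target return, LPM1(tau) = E[max(0, tau - r)]. A minimal sketch:

```python
def lpm1(returns, target=0.0):
    """First-order lower partial moment: average shortfall below the target.

    Unlike variance, only outcomes below the target contribute to risk,
    so upside deviations are not penalized.
    """
    return sum(max(0.0, target - r) for r in returns) / len(returns)
```

Replacing variance with this measure in the mean-risk model is what reshapes the efficient frontier discussed above.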
Mohaddeseh Dashti; Vali Derhami; Esfandiar Ekhtiyari
Abstract
Yarn tenacity is one of the most important properties in yarn production. This paper addresses the modeling of yarn tenacity as well as the optimal determination of the effective input values needed to produce yarn with a desired tenacity. An artificial neural network is used as a suitable structure for the tenacity modeling of 30 Ne cotton yarn. As the first step of modeling, empirical data are collected for cotton yarns. Then, the structure of the neural network is determined and its parameters are adjusted by the backpropagation method. The efficiency and accuracy of the neural model are measured based on the percentage of error as well as the coefficient of determination. The experimental results show that the neural model can predict tenacity with less than 3.5% error. Afterwards, utilizing genetic algorithms, a new method is proposed for the optimal determination of input values in yarn production to reach the desired tenacity. We conducted several experiments over different ranges with various production cost functions. The proposed approach finds the best input values for reaching the desired tenacity while considering production costs.
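The second stage, a genetic algorithm searching for process inputs whose predicted tenacity matches a target, can be sketched as below. The linear surrogate stands in for the trained neural network, and the input ranges, GA operators, and target value are all illustrative assumptions, not the authors' configuration.

```python
import random

def surrogate_model(x):
    """Stand-in for the trained ANN: maps two process inputs to a
    predicted tenacity (purely illustrative coefficients)."""
    return 10.0 + 2.0 * x[0] - 0.5 * x[1]

def fitness(x, desired=14.0):
    # Closer predicted tenacity to the target means higher fitness.
    return -abs(surrogate_model(x) - desired)

def ga(pop_size=30, gens=50, bounds=(0.0, 5.0)):
    random.seed(0)
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(2)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]  # crossover
            if random.random() < 0.2:                        # mutation
                i = random.randrange(2)
                child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.3)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = ga()
print(abs(surrogate_model(best) - 14.0) < 0.5)  # close to the target
```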
Seyed Mojtaba Hosseinirad; S.K. Basu
Abstract
In this paper, we study WSN design as a multi-objective optimization problem using the GA technique. We study the effects of GA parameters, including population size, selection and crossover methods, and mutation probability, on the design. Choosing suitable parameters is a trade-off between different network criteria and characteristics. The type of deployment, network size, radio communication radius, density of sensors in the application area, and location of the base station are the WSN characteristics investigated here. The simulation results of this study indicate that the radio communication radius has a direct effect on radio interference, cluster overlap, the uniformity of sensor node distribution, and communication energy. The optimal radio communication radius depends not on network size or type of deployment but on the density of the network deployment. The location of the base station affects the radio communication energy, cluster overlap, and the average number of communications per cluster head. A base station located outside the application domain is preferred over one located inside. In all the network situations, random deployment performs better than grid deployment.
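One of the criteria above, cluster overlap as a function of the radio communication radius, can be made concrete with a toy computation. The node positions, cluster-head locations, and radii here are illustrative assumptions, not the paper's simulated deployment.

```python
import math

def overlap_count(nodes, heads, radius):
    """Number of sensor nodes reachable by more than one cluster head,
    i.e. nodes lying in overlapping cluster regions."""
    count = 0
    for n in nodes:
        in_range = sum(1 for h in heads if math.dist(n, h) <= radius)
        if in_range > 1:
            count += 1
    return count

nodes = [(1, 1), (2, 2), (5, 5), (6, 6), (9, 1)]
heads = [(0, 0), (7, 7)]
print(overlap_count(nodes, heads, 3.0))   # → 0 (small radius, no overlap)
print(overlap_count(nodes, heads, 10.0))  # → 5 (large radius, full overlap)
```

Increasing the radius drives this count up, which mirrors the reported direct effect of the radio communication radius on cluster overlap and interference.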
Mohammad AllamehAmiri; Vali Derhami; Mohammad Ghasemzadeh
Abstract
Quality of service (QoS) is an important issue in the design and management of web service composition. QoS in web services consists of various non-functional factors, such as execution cost, execution time, availability, successful execution rate, and security. In recent years, the number of available web services has proliferated, and increasingly many of them offer the same functionality. Functionally equivalent web services are distinguished by their quality parameters. Moreover, clients usually demand value-added services rather than those offered by single, isolated web services. Therefore, selecting, from among numerous candidate plans, a composition plan of web services that satisfies client requirements has become a challenging and time-consuming problem. This paper proposes a new constraint-based composition plan optimizer based on a genetic algorithm. The proposed method can efficiently find a composition plan that satisfies user constraints. Its performance is evaluated in a simulated environment.
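The core of any such optimizer is aggregating the QoS attributes of a candidate plan and checking them against the user's constraints; a GA then searches the space of plans using this check inside its fitness function. A minimal sketch for a sequential plan follows; the aggregation rules (cost and time add, availability multiplies) are standard conventions, and the candidate services and constraint values are illustrative assumptions.

```python
def aggregate_qos(plan):
    """Aggregate QoS of a sequential composition plan: execution cost
    and time accumulate additively, availability multiplicatively."""
    cost = sum(s["cost"] for s in plan)
    time = sum(s["time"] for s in plan)
    avail = 1.0
    for s in plan:
        avail *= s["availability"]
    return {"cost": cost, "time": time, "availability": avail}

def satisfies(plan, max_cost, max_time, min_avail):
    """True when the aggregated plan meets all user constraints."""
    q = aggregate_qos(plan)
    return (q["cost"] <= max_cost and q["time"] <= max_time
            and q["availability"] >= min_avail)

plan = [
    {"cost": 3.0, "time": 120, "availability": 0.99},
    {"cost": 5.0, "time": 200, "availability": 0.97},
]
print(satisfies(plan, max_cost=10.0, max_time=400, min_avail=0.95))  # → True
```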