S. Hosseini; M. Khorashadizade
Abstract
High dimensionality is one of the most significant problems when working with large datasets. Feature selection is a procedure for reducing the dimensionality of a dataset by removing redundant and irrelevant features, so that only the most effective features remain, improving the performance of learning algorithms. In this paper, a novel feature selection procedure is presented: a binary teaching-learning-based optimization algorithm with mutation (BMTLBO). The TLBO algorithm is one of the most efficient and practical optimization techniques; although it converges quickly and benefits from strong exploration capability, it may become trapped in a local optimum, so we try to establish a balance between exploration and exploitation. The proposed method has two parts. First, we used the binary version of the TLBO algorithm for feature selection and added a mutation operator to provide a strong local search capability (BMTLBO). Second, we used a modified TLBO algorithm with a self-learning phase (SLTLBO) to train a neural network, demonstrating the method on a classification problem to evaluate its performance. We tested the proposed method on 14 datasets in terms of classification accuracy and the number of selected features. The results showed that BMTLBO outperformed the standard TLBO algorithm, demonstrating the potency of the proposed method. The results are very promising and close to optimal.
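To make the search procedure concrete, the following is a minimal Python sketch of a binary TLBO loop with a bit-flip mutation operator. The transfer function, mutation rate, and the user-supplied `evaluate` scoring function (e.g. classifier accuracy on the selected features) are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def binarize(v):
    # S-shaped transfer function: map continuous updates back to bits
    return (np.random.rand(*v.shape) < 1.0 / (1.0 + np.exp(-v))).astype(int)

def bmtlbo(X, y, evaluate, pop=20, iters=50, pmut=0.05):
    # evaluate(X_subset, y) -> score is a user-supplied fitness (assumption)
    n = X.shape[1]

    def fitness(mask):
        return 0.0 if mask.sum() == 0 else evaluate(X[:, mask.astype(bool)], y)

    P = np.random.randint(0, 2, (pop, n))            # learners as bit masks
    fit = np.array([fitness(p) for p in P])
    for _ in range(iters):
        teacher, mean = P[fit.argmax()], P.mean(axis=0)
        for i in range(pop):
            # Teacher phase: move learner toward the teacher
            tf = np.random.randint(1, 3)             # teaching factor in {1, 2}
            cand = binarize(P[i] + np.random.rand(n) * (teacher - tf * mean))
            # Mutation operator: flip a few bits for stronger local search
            flips = np.random.rand(n) < pmut
            cand[flips] ^= 1
            f = fitness(cand)
            if f > fit[i]:
                P[i], fit[i] = cand, f
            # Learner phase: learn from (or move away from) a random peer
            j = np.random.randint(pop)
            sign = 1 if fit[j] > fit[i] else -1
            cand = binarize(P[i] + sign * np.random.rand(n) * (P[j] - P[i]))
            f = fitness(cand)
            if f > fit[i]:
                P[i], fit[i] = cand, f
    best = fit.argmax()
    return P[best], fit[best]                        # best mask and its score
```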
Seyedeh R. Mahmudi Nezhad Dezfouli; Y. Kyani; Seyed A. Mahmoudinejad Dezfouli
Abstract
Due to the small size, low contrast, and variable position, shape, and texture of multiple sclerosis lesions, their automatic diagnosis and segmentation in magnetic resonance (MR) images is a challenging problem in medical image processing. Early detection of these lesions in the first stages of the disease supports effective diagnosis and treatment evaluation, and automated segmentation is a powerful tool for helping professionals improve the accuracy of disease diagnosis. This study uses modified adaptive multi-level conditional random fields and an artificial neural network to segment and diagnose multiple sclerosis lesions. Instead of assuming the model coefficients to be constant, they are treated as variables in multi-level statistical models. The aim is to evaluate the probability of lesions based on severity, texture, and adjacent areas. The proposed method was applied to 130 MR images of multiple sclerosis patients in two test stages and achieved 98% precision. It also reduced the error detection rate by correcting lesion boundaries using the average intensity of neighborhoods, rotation-invariant features, and texture for very small lesions of 3-5 voxels, and it produced very few false-positive lesions. The proposed model achieved a high sensitivity of 91% with an average of 0.5 false positives.
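The boundary-correction idea can be illustrated with a short sketch: per-voxel lesion probabilities (e.g. from a trained ANN) are refined with a neighborhood-agreement term, and components smaller than 3 voxels are discarded. The weights, the 3x3x3 neighborhood, and the function names are assumptions for illustration only, not the paper's actual multi-level CRF.

```python
import numpy as np
from scipy.ndimage import uniform_filter, label

def refine_lesion_map(prob, image, w_unary=0.7, w_pair=0.3, thresh=0.5):
    # prob: per-voxel lesion probabilities from a classifier (e.g. an ANN)
    # image: the MR intensity volume
    local_mean = uniform_filter(image.astype(float), size=3)  # 3x3x3 mean
    # Pairwise-style term: voxels whose intensity agrees with their
    # neighborhood mean get contextual support; outliers are damped
    agreement = np.exp(-np.abs(image - local_mean))
    refined = w_unary * prob + w_pair * agreement * prob
    labels = refined > thresh
    # Drop connected components smaller than 3 voxels as likely noise
    lbl, _ = label(labels)
    sizes = np.bincount(lbl.ravel())
    keep = sizes >= 3
    keep[0] = False                                  # background stays off
    return keep[lbl]
```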
M. Zarbazoo Siahkali; A.A. Ghaderi; Abdol H. Bahrpeyma; M. Rashki; N. Safaeian Hamzehkolaei
Abstract
Scouring, which occurs when water flow erodes the bed materials around a bridge pier, is a serious safety assessment problem for which many equations and models exist in the literature to estimate approximate scour depth. This research studies how surrogate models estimate the scour depth around circular piers and compares the results with those of empirical formulations. To this end, the pier scour depth was estimated in non-cohesive soils under subcritical flow and live-bed conditions using artificial neural networks (ANN), the group method of data handling (GMDH), multivariate adaptive regression splines (MARS), and Gaussian process models (Kriging). A database containing 246 laboratory data points gathered from various studies was formed, and the data were randomly divided into three parts: 1) training, 2) validation, and 3) testing, to build the surrogate models. Statistical error criteria such as the coefficient of determination (R2), root mean squared error (RMSE), mean absolute percentage error (MAPE), and absolute maximum percentage error (MPE) of the surrogate models were then computed and compared with those of popular empirical formulations. Results revealed that the surrogate models' estimations on the test data were more accurate than those of the empirical equations, with Kriging yielding better estimates than the other models. In addition, sensitivity analyses of all surrogate models showed that the dimensionless pier width (b/y) had the greatest effect on estimating the normalized scour depth (Ds/y).
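As an illustration of this workflow, the sketch below fits two scikit-learn stand-ins, an MLP for the ANN and a Gaussian process for Kriging, on a train/validation/test split and reports the test-set error criteria; GMDH and MARS require third-party packages and are omitted. The input matrix X (dimensionless pier and flow parameters such as b/y) and target d (scour depth) are placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import (r2_score, mean_squared_error,
                             mean_absolute_percentage_error)

def evaluate_surrogates(X, d, seed=0):
    # 70/15/15 split; the validation part would drive model tuning in the
    # full workflow and is left unused in this sketch
    X_tr, X_tmp, d_tr, d_tmp = train_test_split(X, d, test_size=0.3,
                                                random_state=seed)
    X_val, X_te, d_val, d_te = train_test_split(X_tmp, d_tmp, test_size=0.5,
                                                random_state=seed)
    models = {
        "ANN": MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                            random_state=seed),
        "Kriging": GaussianProcessRegressor(),
    }
    report = {}
    for name, m in models.items():
        m.fit(X_tr, d_tr)
        pred = m.predict(X_te)
        report[name] = {
            "R2": r2_score(d_te, pred),
            "RMSE": mean_squared_error(d_te, pred) ** 0.5,
            "MAPE": mean_absolute_percentage_error(d_te, pred),
        }
    return report
```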
H.5. Image Processing and Computer Vision
Seyed M. Ghazali; Y. Baleghi
Abstract
Observation in absolute darkness and in daytime under all atmospheric conditions is one of the advantages of thermal imaging systems. Despite the increasing use of these systems, analyzing thermal images remains difficult due to the variable appearance of pedestrians and changing atmospheric situations. In this paper, an efficient method is proposed for detecting pedestrians in outdoor thermal images that adapts to variable atmospheric situations. In the first step, the type of atmospheric situation is estimated based on global features of the thermal image; then, for each situation, a dedicated algorithm is applied for pedestrian detection. To do this, thermal images are divided into three classes of atmospheric situations: a) fine, such as sunny weather; b) bad, such as rainy and hazy weather; and c) hot, such as hot summer days when pedestrians appear darker than the background. A three-level 2-Dimensional Double Density Dual Tree Discrete Wavelet Transform (2D DD DT DWT) is then applied to the input images, and the energy of the low-frequency coefficients at the third level is calculated as the discriminating feature for atmospheric situation identification. A feed-forward neural network (FFNN) classifier is trained on this feature vector to determine the category of atmospheric situation. Finally, a predetermined algorithm relevant to that category is applied for pedestrian detection. The proposed method achieves high performance: the accuracy of pedestrian detection on two popular databases exceeds 99%.
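A compact sketch of the situation-identification stage follows. A standard three-level 2-D DWT from PyWavelets stands in for the double-density dual-tree transform, which needs a specialized implementation; the feature is the energy of the third-level low-frequency (approximation) band, as described above, and the wavelet choice and network size are assumptions.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def situation_feature(img):
    # Three-level 2-D DWT; coeffs[0] is the third-level approximation band
    coeffs = pywt.wavedec2(img.astype(float), "db4", level=3)
    approx = coeffs[0]
    return np.array([np.sum(approx ** 2)])           # low-frequency energy

def train_situation_classifier(images, labels):
    # labels: 0 = fine, 1 = bad, 2 = hot atmospheric situation
    X = np.vstack([situation_feature(im) for im in images])
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000)
    clf.fit(X, labels)
    return clf   # routes each image to the situation-specific detector
```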
I.3.7. Engineering
B. Hosseinzadeh Samani; H. HouriJafari; H. Zareiforoush
Abstract
In this study, the energy consumption of the food and beverage industries of Iran was investigated. Energy consumption in this sector was modeled using an artificial neural network (ANN), response surface methodology (RSM), and a genetic algorithm (GA). First, the input data for the model were calculated from statistical sources, balance sheets, and the method proposed in this paper. Diesel and liquefied petroleum gas were found to have, respectively, the highest and lowest shares of energy consumption among the carrier types. For each of the evaluated energy carriers (diesel, kerosene, fuel oil, natural gas, electricity, liquefied petroleum gas, and gasoline), the best-fitting model was selected after averaging the runs of the developed models. Finally, the developed models, each representing the energy consumption of the food and beverage industries for one energy carrier, were combined into a final model using the Simulink toolbox of MATLAB. The data analysis indicated that natural gas consumption in Iran's food and beverage industries is increasing, while decreasing trends were estimated for fuel oil and liquefied petroleum gas.
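The per-carrier model-selection step might look like the sketch below: each candidate model is trained over several runs, run scores are averaged, and the best-fitting model is kept for that carrier. The candidate set, scoring, and number of runs are illustrative assumptions, with scikit-learn models standing in for the paper's ANN, RSM, and GA variants.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

CARRIERS = ["diesel", "kerosene", "fuel oil", "natural gas",
            "electricity", "liquefied petroleum gas", "gasoline"]

def select_model(X, y, runs=5):
    # Candidate models per carrier (stand-ins, not the paper's exact set)
    candidates = {
        "ANN": lambda s: MLPRegressor(hidden_layer_sizes=(6,),
                                      max_iter=3000, random_state=s),
        "polynomial/RSM-like": lambda s: LinearRegression(),
    }
    scores = {}
    for name, make in candidates.items():
        # Average the fit score (R^2) over several training runs
        scores[name] = np.mean([make(s).fit(X, y).score(X, y)
                                for s in range(runs)])
    best = max(scores, key=scores.get)
    return best, scores[best]

# models = {c: select_model(X_by_carrier[c], y_by_carrier[c]) for c in CARRIERS}
```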
F.4.4. Experimental design
V. Khoshdel; A. R. Akbarzadeh
Abstract
This paper presents an application of design-of-experiments techniques to determine the optimized parameters of an artificial neural network (ANN) used to estimate force from surface electromyogram (sEMG) signals. The accuracy of the ANN model is highly dependent on the network parameter settings. Plenty of algorithms have been used to obtain optimal ANN settings; however, to the best of our knowledge, they did not use regression analysis to model the effect of each parameter, nor did they report the percent contribution and significance level of the ANN parameters for force estimation. In this paper, experimental sEMG data are collected, and the ANN parameters are set according to an orthogonal array design table to train the ANN. The Taguchi method helps us find the optimal parameter settings. Next, the analysis of variance (ANOVA) technique is used to obtain the significance level and contribution percentage of each parameter, optimizing the ANN model for human force estimation. The results indicate that design of experiments is a promising approach for estimating human force from sEMG signals.
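The core of the ANOVA step can be sketched as follows: the ANN is trained at the settings of a small orthogonal array, and each factor's percent contribution is computed from its sum of squares. The 2-factor array, factor levels, and the `train_and_score` routine are hypothetical stand-ins for the paper's actual design.

```python
import numpy as np

# A 2-factor, 2-level design (hypothetical stand-in for the paper's array)
ARRAY = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
HIDDEN = [5, 15]        # factor 0: number of hidden neurons (assumption)
LRATE = [0.01, 0.1]     # factor 1: learning rate (assumption)

def contribution(scores, array):
    # Percent contribution of each factor from its ANOVA sum of squares
    grand = scores.mean()
    ss_total = ((scores - grand) ** 2).sum()
    pct = {}
    for f in range(array.shape[1]):
        ss_f = sum((array[:, f] == lvl).sum()
                   * (scores[array[:, f] == lvl].mean() - grand) ** 2
                   for lvl in np.unique(array[:, f]))
        pct[f"factor {f}"] = 100.0 * ss_f / ss_total
    return pct

# scores = np.array([train_and_score(HIDDEN[a], LRATE[b]) for a, b in ARRAY])
# print(contribution(scores, ARRAY))
```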
Mohaddeseh Dashti; Vali Derhami; Esfandiar Ekhtiyari
Abstract
Yarn tenacity is one of the most important properties in yarn production. This paper addresses the modeling of yarn tenacity as well as the optimal determination of the effective input values needed to produce yarn with a desired tenacity. An artificial neural network is used as a suitable structure for modeling the tenacity of 30 Ne cotton yarn. As the first step in modeling, empirical data were collected for cotton yarns. Then, the structure of the neural network was determined and its parameters were adjusted by the backpropagation method. The efficiency and accuracy of the neural model were measured using the percentage of error as well as the coefficient of determination. The experimental results show that the neural model could predict tenacity with less than 3.5% error. Afterwards, utilizing genetic algorithms, a new method is proposed for the optimal determination of input values in yarn production to reach the desired tenacity. We conducted several experiments for different ranges with various production cost functions. The proposed approach could find the best input values for reaching the desired tenacity while considering production costs.
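A minimal sketch of this inverse-design step, assuming a trained tenacity model `model(x)`, a cost function `cost(x)`, and input bounds `lo`/`hi` (all placeholders): a simple GA searches for input values that drive the predicted tenacity toward the target while penalizing production cost.

```python
import numpy as np

def ga_inputs(model, cost, target, lo, hi, pop=40, gens=100, w=0.1):
    # lo, hi: arrays of lower/upper bounds on the process inputs
    dim = len(lo)
    P = np.random.uniform(lo, hi, (pop, dim))

    def loss(x):
        # Distance from the desired tenacity plus a weighted cost penalty
        return abs(model(x) - target) + w * cost(x)

    for _ in range(gens):
        f = np.array([loss(p) for p in P])
        elite = P[np.argsort(f)[: pop // 2]]          # selection: best half
        pa, pb = elite[np.random.randint(len(elite), size=(2, pop))]
        alpha = np.random.rand(pop, dim)
        P = alpha * pa + (1 - alpha) * pb             # blend crossover
        P += np.random.normal(0.0, 0.05, P.shape) * (hi - lo)  # mutation
        P = np.clip(P, lo, hi)
    f = np.array([loss(p) for p in P])
    return P[f.argmin()]                              # best input setting found
```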