Original/Review Paper
A.10. Power Management
F. Sabahi
Abstract
This paper develops an energy management approach for a multi-microgrid (MMG), taking into account multiple objectives involving plug-in electric vehicles (PEVs), photovoltaic (PV) power, and a distribution static compensator (DSTATCOM) to improve power provision sharing. In the proposed approach, there is a pool of fuzzy microgrid granules that compete with each other to prolong their lives while being monitored and evaluated by specific fuzzy sets. In addition, based on the hourly reconfiguration of microgrids (MGs), the granules learn to dispatch cost-effective resources. To promote interactive service, a well-defined multi-objective approach is derived from fuzzy granulation analysis to improve power quality in MMGs. A combination of the meta-heuristic genetic algorithm (GA) and particle swarm optimization (PSO) eliminates the computational difficulty of the nonlinearity and uncertainty analysis of the system and improves the precision of the results. The proposed approach is successfully applied to a 69-bus MMG test system, with results reported in terms of stored energy improvement, daily voltage profile improvement, MMG operations, and cost reduction.
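The GA-PSO hybridization mentioned in the abstract can be sketched in a minimal form: standard PSO velocity and position updates, with a GA-style crossover-and-mutation step injected each generation. Everything below (parameter values, the sphere objective, the specific way the two methods are interleaved) is illustrative and is not taken from the paper.

```python
import random

def hybrid_ga_pso(fitness, dim=2, n_particles=20, iters=100, seed=0):
    """Minimal GA-PSO hybrid: PSO velocity updates, plus a GA-style
    crossover/mutation step replacing the worst particle each iteration.
    All parameter values are illustrative, not from the paper."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
        # GA step: replace the worst particle with a mutated crossover
        # of two randomly chosen personal bests.
        worst = max(range(n_particles), key=lambda i: pbest_f[i])
        a, b = rng.sample(range(n_particles), 2)
        child = [pbest[a][d] if rng.random() < 0.5 else pbest[b][d]
                 for d in range(dim)]
        child = [v + rng.gauss(0, 0.1) for v in child]  # Gaussian mutation
        pos[worst] = child
    return gbest, gbest_f

# Example: minimize the sphere function sum(x_i^2).
best, best_f = hybrid_ga_pso(lambda x: sum(v * v for v in x))
```

The GA step keeps diversity in the swarm while PSO drives convergence, which is the usual motivation for combining the two.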
Original/Review Paper
N. Majidi; K. Kiani; R. Rastgoo
Abstract
This study presents a method to reconstruct a high-resolution image using a deep convolutional neural network. We propose a deep model, entitled Deep Block Super Resolution (DBSR), that fuses the output features of a deep convolutional network and a shallow convolutional network. In this way, our model benefits simultaneously from the high-frequency and low-frequency features extracted by the deep and shallow networks, respectively. We use residual layers in our model to build repetitive layers, increase the depth of the model, and make it end-to-end. Furthermore, we employ a deep network in the up-sampling step instead of the bicubic interpolation method used in most previous works. Since image resolution plays an important role in obtaining rich information from medical images and helps in accurate and faster diagnosis, we use medical images for resolution enhancement. Our model is capable of reconstructing a high-resolution image from a low-resolution one in both medical and general images. Evaluation results on the TSA and TZDE datasets, containing MRI images, and the Set5, Set14, B100, and Urban100 datasets, containing general images, demonstrate that our model outperforms state-of-the-art alternatives in both medical and general super-resolution enhancement from a single input image.
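The low/high-frequency fusion idea behind DBSR can be illustrated with a toy NumPy decomposition (this is not the DBSR architecture, just the underlying signal intuition): a blur plays the role of the shallow branch's low-frequency features, the residual plays the role of the deep branch's high-frequency features, and fusing both recovers the full signal.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple box blur acting as a stand-in for a shallow network's
    low-frequency feature extractor (illustrative only)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
img = rng.random((16, 16))
low = box_blur(img)    # low-frequency component ("shallow" branch)
high = img - low       # high-frequency residual ("deep" branch)
fused = low + high     # fusing both branches recovers the full signal
```

A super-resolution network that sees only one of the two components loses either edges (high frequency) or smooth structure (low frequency); fusing both is what motivates the two-branch design.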
Original/Review Paper
R. Mohammadian; M. Mahlouji; A. Shahidinejad
Abstract
Multi-view face detection in open environments is a challenging task due to wide variations in illumination, face appearance, and occlusion. In this paper, a robust method for multi-view face detection in open environments, using a combination of Gabor features and neural networks, is presented. Firstly, the effect of changing the Gabor filter parameters (orientation, frequency, standard deviation, aspect ratio, and phase offset) on an image is analysed; secondly, the range of Gabor filter parameter values is determined; and finally, the best values for these parameters are specified. A multilayer feedforward neural network with a back-propagation algorithm is used as the classifier. The input vector is obtained by convolving the input image with a Gabor filter whose angle and frequency values both equal π/2. The proposed algorithm is tested on 1,484 image samples with simple and complex backgrounds. The experimental results show that the proposed detector achieves high detection accuracy compared with several popular face-detection algorithms, such as OpenCV's Viola-Jones detector.
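The feature-extraction step can be sketched as follows: build a real-valued Gabor kernel and convolve it with the image to obtain the network's input features. The orientation and frequency of π/2 come from the abstract; the remaining parameter values (sigma, aspect ratio, phase offset, kernel size) are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def gabor_kernel(theta, freq, sigma=2.0, gamma=0.5, psi=0.0, size=9):
    """Real-valued Gabor kernel. theta: orientation, freq: angular
    frequency, sigma: Gaussian std, gamma: aspect ratio, psi: phase
    offset. Defaults are illustrative, not taken from the paper."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier = np.cos(freq * x_t + psi)
    return envelope * carrier

def convolve2d(img, kernel):
    """Naive 'valid' 2-D convolution, enough to build the feature map
    fed to the neural-network classifier."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    flipped = kernel[::-1, ::-1]
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * flipped)
    return out

# Both orientation and frequency set to pi/2, as stated in the abstract.
k = gabor_kernel(theta=np.pi / 2, freq=np.pi / 2)
features = convolve2d(np.random.default_rng(0).random((32, 32)), k)
```

In practice a library routine (e.g. an FFT-based convolution) would replace the naive loop; the loop is kept here to make the operation explicit.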
Original/Review Paper
A. Omondi; I.A. Lukandu; G.W. Wanyembi
Abstract
Redundant and irrelevant features in high-dimensional data increase the complexity of the underlying mathematical models. It is necessary to conduct pre-processing steps that search for the most relevant features in order to reduce the dimensionality of the data. This study made use of a meta-heuristic search approach that uses lightweight random simulations to balance the exploitation of relevant features against the exploration of features that have the potential to be relevant. In doing so, the study evaluated how effective manipulating the search component in feature selection is at achieving high accuracy with reduced dimensions. A control-group experimental design was used to observe factual evidence. The context of the experiment was the high-dimensional data encountered in performance tuning of complex database systems. The Wilcoxon signed-rank test at the .05 level of significance was used to compare repeated classification accuracy measurements on the independent experiment and control group samples. Encouraging results with a p-value < 0.05 were recorded, providing evidence to reject the null hypothesis in favour of the alternative hypothesis, which states that meta-heuristic search approaches are effective in achieving high accuracy with reduced dimensions, depending on the outcome variable under investigation.
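The statistical comparison described above can be sketched with a small pure-Python Wilcoxon signed-rank test using the normal approximation (the accuracy figures below are invented paired measurements, not the study's data):

```python
import math

def wilcoxon_signed_rank(x, y):
    """Two-sided Wilcoxon signed-rank test via the normal approximation.
    Returns (W+, p_value). Zero differences are dropped; tied absolute
    differences receive average ranks."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    ordered = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:  # assign average ranks over ties in |difference|
        j = i
        while j + 1 < n and abs(diffs[ordered[j + 1]]) == abs(diffs[ordered[i]]):
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[ordered[k]] = avg
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return w_plus, p

# Invented paired classification-accuracy measurements:
control   = [0.70, 0.72, 0.69, 0.71, 0.68, 0.73, 0.70, 0.74, 0.69, 0.72]
treatment = [0.78, 0.80, 0.77, 0.79, 0.76, 0.82, 0.78, 0.83, 0.77, 0.81]
w, p = wilcoxon_signed_rank(treatment, control)
```

With every treatment measurement above its control counterpart, W+ equals the full rank sum and the p-value falls below .05, mirroring the decision rule in the study. For real analyses the exact-distribution version (e.g. `scipy.stats.wilcoxon`) is preferable at small n.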
Original/Review Paper
Z. Anari; A. Hatamlou; B. Anari; M. Masdari
Abstract
Transactions in web data often consist of quantitative data, suggesting that fuzzy set theory can be used to represent such data. The time spent by users on each web page, one type of web data, can be regarded as a trapezoidal membership function (TMF) and used to evaluate user browsing behavior. The quality of mining fuzzy association rules depends on the membership functions, and since the membership functions of each web page differ from those of other web pages, automatically finding the number and positions of TMFs is significant. In this paper, a reinforcement-based optimization approach called LA-OMF is proposed to find both the number and the positions of TMFs for fuzzy association rules. In the proposed algorithm, the centers and spreads of the TMFs are considered as parameters of the search space, and a new representation using learning automata (LA) is proposed to optimize these parameters. The performance of the proposed approach was evaluated, and the results were compared with those of other algorithms on a real dataset. Experiments on datasets of different sizes confirmed that the proposed LA-OMF improves the efficiency of mining fuzzy association rules by extracting optimized membership functions.
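A trapezoidal membership function of the kind being optimized is defined by four breakpoints. The sketch below evaluates one for page browsing time; the breakpoints 10/20/40/60 seconds are illustrative, not values from the paper:

```python
def trapezoidal_mf(x, a, b, c, d):
    """Trapezoidal membership function with left foot a, left shoulder b,
    right shoulder c, and right foot d (a < b <= c < d)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge

# Membership grades for a "medium browsing time" fuzzy set, in seconds
# (breakpoints are illustrative, not from the paper).
medium = [trapezoidal_mf(t, 10, 20, 40, 60) for t in (5, 15, 30, 50, 70)]
```

LA-OMF searches over the centers and spreads of such functions, i.e. over the (a, b, c, d) tuples, one set per web page.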
Original/Review Paper
J. Tayyebi; E. Hosseinzadeh
Abstract
The fuzzy c-means clustering algorithm is a useful tool for clustering, but it is suitable only for crisp, complete data. In this article, an enhancement of the algorithm is proposed that is suitable for clustering trapezoidal fuzzy data. A linear ranking function is used to define a distance for trapezoidal fuzzy data. Then, as an application, a method based on the proposed algorithm is presented for clustering incomplete fuzzy data. The method substitutes each missing attribute with a trapezoidal fuzzy number determined using the corresponding attribute of the q nearest neighbors. Comparisons and analysis of the experimental results demonstrate the capability of the proposed method.
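A ranking-based distance of the kind described can be sketched as follows. The specific ranking function shown (the average of the four defining points) is a standard linear choice, not necessarily the one used in the article:

```python
def ranking(t):
    """A common linear ranking function for a trapezoidal fuzzy number
    t = (a, b, c, d): the average of its four defining points. The
    article's exact ranking function may differ."""
    a, b, c, d = t
    return (a + b + c + d) / 4.0

def distance(t1, t2):
    """Ranking-based distance between two trapezoidal fuzzy numbers,
    usable as the dissimilarity measure inside fuzzy c-means."""
    return abs(ranking(t1) - ranking(t2))

x = (1.0, 2.0, 3.0, 4.0)   # ranking(x) = 2.5
y = (2.0, 3.0, 5.0, 6.0)   # ranking(y) = 4.0
d = distance(x, y)
```

Reducing each fuzzy number to a scalar rank is what lets the ordinary c-means update equations carry over to fuzzy-valued attributes.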
Original/Review Paper
M. Danesh; S. Danesh
Abstract
This paper presents a new method for regression model prediction in an uncertain environment. In practical engineering problems, in order to develop a regression or ANN model for making predictions, the average of a set of repeated observed values is introduced to the model as an input variable. Therefore, the estimated response of the process is also the average of a set of output values, where the variation around the mean is not determinate. However, to provide unbiased and precise estimations, the predictions are required to be correct on average, and the spread of the data must be specified. To address this issue, we propose a method based on a fuzzy inference system together with genetic and linear programming algorithms. We consider crisp inputs and a symmetric triangular fuzzy output. The proposed algorithm is applied to fit the fuzzy regression model. In addition, we apply a simulation example and a practical example from the field of machining processes to assess the performance of the proposed method in dealing with practical problems in which the output variables are uncertain and imprecise. Finally, we compare the performance of the suggested method with that of other methods. Based on the examples, the proposed method is verified for prediction. The results show that the proposed method reduces the error values to a minimum level and is more accurate than the linear programming (LP) and fuzzy weights with linear programming (FWLP) methods.
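The crisp-input, symmetric-triangular-fuzzy-output setting can be sketched with a one-variable fuzzy linear model. The coefficient values below are invented for illustration; in the paper they would be fitted by the GA and linear-programming procedure:

```python
def fuzzy_linear_predict(x, coeffs):
    """Prediction from a fuzzy linear model y = A0 + A1*x whose
    coefficients are symmetric triangular fuzzy numbers given as
    (center, spread) pairs. The output is itself a symmetric
    triangular fuzzy number (center, spread)."""
    (c0, s0), (c1, s1) = coeffs
    center = c0 + c1 * x
    spread = s0 + s1 * abs(x)  # spreads add under fuzzy arithmetic
    return center, spread

# A0 = (2.0, 0.3), A1 = (0.5, 0.1): illustrative fitted coefficients.
center, spread = fuzzy_linear_predict(4.0, [(2.0, 0.3), (0.5, 0.1)])
```

The center tracks the mean response ("correct on average"), while the spread makes the variation around the mean explicit, which is exactly the pair of requirements the abstract sets out.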
Original/Review Paper
I. Pasandideh; A. Rajabi; F. Yosefvand; S. Shabanlou
Abstract
Generally, the length of the hydraulic jump is one of the most important parameters in designing a stilling basin. In this study, the length of the hydraulic jump on sloping rough beds was predicted using Gene Expression Programming (GEP) for the first time. Monte Carlo simulations were used to examine the ability of the GEP model. In addition, k-fold cross-validation was employed to verify the results of the GEP model. To determine the length of the hydraulic jump, five different GEP models were introduced using different input parameters. Then, by analyzing the GEP models' results, the superior model was identified. For the superior model, the correlation coefficient (R), Mean Absolute Percentage Error (MAPE), and Root Mean Square Error (RMSE) were computed as 0.901, 11.517, and 1.664, respectively. According to the sensitivity analysis, the Froude number upstream of the hydraulic jump was identified as the most important parameter for modeling the length of the hydraulic jump. Furthermore, a partial derivative sensitivity analysis (PDSA) was performed; the PDSA was positive for all input variables.
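The three goodness-of-fit measures reported (R, MAPE, RMSE) are standard and can be computed as below. The jump-length values are invented for illustration, not the study's measurements:

```python
import math

def r_mape_rmse(obs, pred):
    """Correlation coefficient R, Mean Absolute Percentage Error (in
    percent), and Root Mean Square Error for observed vs. predicted."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    r = cov / (so * sp)
    mape = 100.0 / n * sum(abs((o - p) / o) for o, p in zip(obs, pred))
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / n)
    return r, mape, rmse

# Illustrative jump-length data (not the study's measurements):
observed  = [4.0, 5.5, 6.1, 7.2, 8.0]
predicted = [4.2, 5.3, 6.5, 7.0, 8.4]
r, mape, rmse = r_mape_rmse(observed, predicted)
```

Note that MAPE is scale-free while RMSE carries the units of the predicted quantity, which is why papers commonly report both alongside R.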
Applied Article
H. Rahmani; H. Kamali; H. Shah-Hosseini
Abstract
Nowadays, a significant number of studies are devoted to discovering important nodes in graph data. Social networks, as graph data, have attracted a lot of attention. There are various purposes for discovering the important nodes in social networks, such as finding the leaders in them, i.e. the users who play an important role in promoting advertising, etc. Different criteria have been proposed for discovering important nodes in graph data. Measuring a node's importance by a single criterion may be inefficient due to the variety of graph structures. Recently, combinations of criteria have been used in the discovery of important nodes. In this paper, we propose a system for the Discovery of Important Nodes in social networks using Genetic Algorithms (DINGA). In our proposed system, important nodes in social networks are discovered by employing a combination of eight informative criteria and their intelligent weighting. We compare our results with those of a manually weighted method, which uses random weightings for each criterion, on four real networks. Our method shows an average improvement of 22% in the accuracy of important-node discovery.
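The core scoring step, a weighted combination of several normalized criteria per node, can be sketched as follows. The paper combines eight criteria with GA-learned weights; only two invented criteria with illustrative weights are shown here:

```python
def node_importance(criteria, weights):
    """Importance score of each node as a weighted sum of min-max
    normalized criterion values. Criteria and weights here are
    illustrative stand-ins for the paper's eight GA-weighted criteria."""
    names = list(criteria)
    norm = {}
    for name in names:  # min-max normalize each criterion across nodes
        vals = criteria[name]
        lo, hi = min(vals.values()), max(vals.values())
        span = (hi - lo) or 1.0
        norm[name] = {n: (v - lo) / span for n, v in vals.items()}
    nodes = criteria[names[0]].keys()
    return {n: sum(weights[name] * norm[name][n] for name in names)
            for n in nodes}

# Two invented criteria over three nodes:
degree    = {"A": 5, "B": 3, "C": 1}
closeness = {"A": 0.9, "B": 0.8, "C": 0.2}
scores = node_importance({"degree": degree, "closeness": closeness},
                         {"degree": 0.6, "closeness": 0.4})
top = max(scores, key=scores.get)
```

In DINGA the weight vector itself is the genome the genetic algorithm evolves, so that the combination adapts to the structure of each network rather than being fixed by hand.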
Methodologies
M. Babazadeh Shareh; H.R. Navidi; H. Haj Seyed Javadi; M. HosseinZadeh
Abstract
In cooperative P2P networks, there are two kinds of illegal users, namely free riders and Sybils. Free riders are those who try to receive services without any sort of cost. Sybil users are rational peers that have multiple fake identities. Some techniques to detect free riders and Sybil users have previously been proposed by a number of researchers, such as the Tit-for-Tat and SybilGuard techniques. Although such previously proposed techniques were quite successful in detecting free riders and Sybils individually, no technique is capable of detecting both simultaneously. Therefore, the main objective of this research is to propose a single mechanism to detect both kinds of illegal users based on game theory. Deriving new centrality and bandwidth-contribution formulas with an incentive-mechanism approach is the basic idea of the proposed solution. The results of this paper show that as the life of the network passes, free riders are identified, and by detecting Sybil nodes, the number of services offered to them is decreased.
Original/Review Paper
S. Javadi; R. Safa; M. Azizi; Seyed A. Mirroshandel
Abstract
Online scientific communities are platforms that publish books, journals, and scientific papers, and help promote knowledge. Researchers use search engines to find desired information, including scientific papers, experts to collaborate with, and publication venues, but in many cases, because they search by keywords and the content is not taken into account, they do not achieve the desired results at the early stages. Online scientific communities can respond to their users more efficiently by offering a customized search. In this paper, a dataset including bibliographic information on users' publications, publication venues, and other published papers is used to find an expert in a particular context, where experts are recommended to a user according to his records and preferences. A user's request to find an expert is presented as keywords that represent a certain expertise, and the system output is a ranked list of suggestions for that user. Each suggestion is the name of an expert identified as appropriate to collaborate with the user. In an evaluation using the IEEE database, the proposed method reached an accuracy of 71.50 percent, which seems to be an acceptable result.
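The query-to-expert matching step can be sketched with a toy keyword-overlap ranker. This is a simplified stand-in for the paper's content-based recommendation, and all expert names and keywords below are invented:

```python
def recommend_experts(query_keywords, expert_profiles, k=3):
    """Rank experts by overlap between the query keywords and the
    keyword bag built from each expert's publications. A toy stand-in
    for the paper's method; names and keywords are invented."""
    q = set(w.lower() for w in query_keywords)
    scored = []
    for name, pubs in expert_profiles.items():
        bag = [w.lower() for kw in pubs for w in kw]
        score = sum(1 for w in bag if w in q)  # frequency-weighted overlap
        scored.append((score, name))
    scored.sort(key=lambda t: (-t[0], t[1]))   # best score first
    return [name for _, name in scored[:k]]

profiles = {
    "Expert A": [["deep", "learning", "vision"], ["vision", "tracking"]],
    "Expert B": [["databases", "tuning"]],
    "Expert C": [["vision", "segmentation"]],
}
ranked = recommend_experts(["vision", "learning"], profiles, k=2)
```

A production system would replace raw overlap with TF-IDF or learned embeddings and fold in the user's own records and preferences, as the abstract describes.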
Applied Article
R. Azizi; A. M. Latif
Abstract
In this work, we show that image reconstruction from a burst of individually demosaicked RAW captures propagates demosaicking artifacts throughout the image processing pipeline. Hence, we propose a joint regularization scheme for burst denoising and demosaicking. We model the burst alignment functions and the color filter array sampling functions as one linear operator. Then, we formulate the individual burst reconstruction and demosaicking problems as a three-color-channel optimization problem. We introduce a cross-channel prior into the solution of this optimization problem and develop a numerical solver via the alternating direction method of multipliers. Moreover, our proposed method avoids the complexity of alignment estimation as a preprocessing step for burst reconstruction. It relies on a phase correlation approach in the Fourier domain to efficiently find the relative translation, rotation, and scale among the burst captures and to perform warping accordingly. As a result of these steps, the proposed joint burst denoising and demosaicking solution improves the quality of reconstructed images by a considerable margin compared to existing image-model-based methods.
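The phase-correlation alignment step can be sketched for the translation-only case (the paper also recovers rotation and scale, typically via a log-polar extension; only translation is shown here):

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the integer translation between two same-sized images via
    phase correlation: the normalized cross-power spectrum of the pair
    has an inverse FFT that peaks at the relative shift."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12      # keep only the phase
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks past the midpoint to negative shifts.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(3, -5), axis=(0, 1))
dy, dx = phase_correlation_shift(shifted, img)  # recovers (3, -5)
```

Because only the spectrum's phase is kept, the correlation surface is a sharp delta at the true offset, which is what makes the method cheap enough to replace an explicit alignment-estimation preprocessing step.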