H.6.4. Clustering
M. Manteqipour; A.R. Ghaffari Hadigheh; R. Mahmoodvand; A. Safari
Abstract
Grouping datasets plays an important role in many areas of scientific research. Depending on the data features and the application, different constraints are imposed on the groups, while having groups with similar members is always a main criterion. In this paper, we propose an algorithm for grouping objects with random labels and nominal features having many nominal attributes. In addition, a size constraint on the groups is necessary. These conditions lead to a mixed-integer optimization problem that is neither convex nor linear. The problem is NP-hard, and exact solution methods are computationally costly. Our motivation for solving such a problem comes from grouping insurance data, which is essential for fair pricing. The proposed algorithm includes two phases. First, we rank random labels using fuzzy numbers. Afterwards, an adjusted K-means algorithm is used to produce homogeneous groups satisfying a cluster-size constraint. Fuzzy numbers are used to compare random labels in both their observed values and their chance of occurrence. Moreover, an index is defined to measure the similarity of multi-valued attributes without perfect information to those accompanied by perfect information. Since all ranks are scaled into the interval [0,1], the result of ranking random labels does not need rescaling techniques. In the adjusted K-means algorithm, the optimum number of clusters is found using the coefficient of variation instead of the Euclidean distance. Experiments demonstrate that the proposed algorithm produces fairly homogeneous and significantly different groups of the requisite size.
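The idea of choosing the number of clusters by the coefficient of variation can be sketched as follows. This is an illustrative one-dimensional reading, not the paper's exact algorithm: cluster the data for each candidate k with plain K-means, then prefer the clustering whose clusters have the lowest average coefficient of variation (std/mean).

```python
import random

def kmeans_1d(data, k, iters=50, seed=0):
    """Plain 1-D k-means; returns a list of non-empty clusters (lists of values)."""
    rng = random.Random(seed)
    centers = rng.sample(data, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            i = min(range(k), key=lambda j: abs(x - centers[j]))
            clusters[i].append(x)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return [c for c in clusters if c]

def mean_cv(clusters):
    """Average coefficient of variation (std/mean) over the clusters."""
    cvs = []
    for c in clusters:
        m = sum(c) / len(c)
        var = sum((x - m) ** 2 for x in c) / len(c)
        if m != 0:
            cvs.append(var ** 0.5 / abs(m))
    return sum(cvs) / len(cvs)

def best_k(data, k_range):
    """Pick the k whose clustering has the lowest average CV."""
    return min(k_range, key=lambda k: mean_cv(kmeans_1d(data, k)))
```

A homogeneous clustering drives the per-cluster CV toward zero, which is why it can stand in for a distance-based quality score.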
D.3. Data Storage Representations
E. Fadaei-Kermani; G. A Barani; M. Ghaeini-Hessaroeyeh
Abstract
Drought is a climatic phenomenon that may occur in any climate condition and in all regions of the earth. Effective drought management depends on the application of appropriate drought indices, which are variables used to detect and characterize drought conditions. In this study, drought occurrence was predicted based on the standard precipitation index (SPI) using k-nearest-neighbor modeling. The model was tested using precipitation data from Kerman, Iran. Results showed that the model gives reasonable predictions of the drought situation in the region. Finally, the efficiency and precision of the model were quantified by several statistical coefficients. Appropriate values of the correlation coefficient (r = 0.874), mean absolute error (MAE = 0.106), root mean square error (RMSE = 0.119), and coefficient of residual mass (CRM = 0.0011) indicated that the model is suitable and efficient.
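A minimal sketch of the two ingredients above: a k-nearest-neighbor predictor and the reported error metrics. The feature representation here (a single scalar per observation) and the CRM formula, the relative difference between the total observed and total predicted mass, are stated assumptions, not details from the paper.

```python
def knn_predict(train_x, train_y, query, k=3):
    """Predict a value as the mean of the k nearest neighbours (scalar features)."""
    order = sorted(range(len(train_x)), key=lambda i: abs(train_x[i] - query))
    return sum(train_y[i] for i in order[:k]) / k

def mae(obs, pred):
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def rmse(obs, pred):
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

def crm(obs, pred):
    """Coefficient of residual mass (one common definition)."""
    return (sum(obs) - sum(pred)) / sum(obs)
```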
G.3.5. Systems
M. Rezvani
Abstract
Cloud computing has become an attractive target for attackers, as the mainstream technologies in the cloud, such as virtualization and multi-tenancy, permit multiple users to utilize the same physical resource, posing the so-called problem of internally facing security. Moreover, traditional network-based intrusion detection systems (IDSs) are ineffective when deployed in cloud environments, because such IDSs employ only network information in their detection engine, which makes them blind to cloud-specific vulnerabilities. In this paper, we propose a novel assessment methodology for anomaly-based IDSs for cloud computing that takes into account both network- and system-level information when generating the evaluation dataset. In addition, our approach deploys IDS sensors in each virtual machine in order to develop a cooperative anomaly detection engine. The proposed assessment methodology is then deployed in a testbed cloud environment to generate an IDS dataset that includes both network- and system-level features. Finally, we evaluate the performance of several machine learning algorithms on the generated dataset. Our experimental results demonstrate that the proposed IDS assessment approach is effective for attack detection in the cloud, as most of the algorithms are able to identify the attacks with a high level of accuracy.
Deepika Koundal
Abstract
The enormous growth of the World Wide Web in recent years has made it necessary to perform resource discovery efficiently. For a crawler, it is not a simple task to download domain-specific web pages, and an unfocused approach often yields undesired results. Therefore, several new ideas have been proposed; among them, a key technique is focused crawling, which crawls particular topical portions of the World Wide Web quickly and efficiently without having to explore all web pages. The proposed approach does not use only keywords for the crawl, but also relies on high-level background knowledge, with concepts and relations that are compared with the text of each searched page. In this paper, a combined crawling strategy is proposed that integrates a link analysis algorithm with an association metric. The approach identifies relevant pages before crawling and prioritizes the URL queue so that pages of higher relevance are downloaded first, based on a domain-dependent ontology. This strategy uses the ontology to estimate the semantic content of a URL without exploring it, which in turn strengthens the ordering metric for the URL queue and leads to the retrieval of the most relevant pages.
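The prioritized URL queue at the heart of a focused crawler can be sketched with a max-priority heap; the relevance score would come from the ontology-based ordering metric, which is abstracted away here as a plain number.

```python
import heapq

class URLQueue:
    """Frontier that pops the most relevant unseen URL first."""
    def __init__(self):
        self._heap = []
        self._seen = set()

    def push(self, url, relevance):
        if url not in self._seen:
            self._seen.add(url)
            # heapq is a min-heap, so negate relevance for max-first order
            heapq.heappush(self._heap, (-relevance, url))

    def pop(self):
        relevance, url = heapq.heappop(self._heap)
        return url, -relevance
```

Duplicate pushes are silently ignored, so a page discovered from several parents is fetched only once.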
F.2.7. Optimization
M. YousefiKhoshbakht; N. Mahmoodi Darani
Abstract
The Open Vehicle Routing Problem (OVRP) is one of the most important extensions of the Vehicle Routing Problem (VRP) and has many applications in industry and services. In the VRP, we are given a set of customers with specified demands of goods and a depot where a fleet of identical capacitated vehicles is located. We are also given the travelling costs between the depot and all customers, and between each pair of customers. In the OVRP, in contrast to the VRP, vehicles are not required to return to the depot after completing service. Because the VRP and OVRP are NP-hard problems, an efficient hybrid elite ant system called EACO is proposed in this paper for solving them. In this algorithm, a modified tabu search (TS), a new state transition rule, and a modified pheromone updating rule are used to further improve solutions. These modifications prevent the proposed algorithm from becoming trapped in local optima and allow it to explore different parts of the solution space. Computational results on fourteen standard benchmark instances of the VRP and OVRP show that EACO finds the best known solutions for most instances and is comparable, in terms of solution quality, to the best-performing metaheuristics published in the literature.
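A state transition rule in ant systems typically balances exploitation and biased exploration. The sketch below shows the classical ACS-style rule (choose the best edge with probability q0, otherwise sample proportionally to pheromone times heuristic desirability); the paper's modified rule is not specified here, so this stands only as the baseline form.

```python
import random

def next_city(current, unvisited, tau, eta, alpha=1.0, beta=2.0, q0=0.9,
              rng=random):
    """ACS-style state transition: tau = pheromone, eta = 1/cost heuristic."""
    weights = {j: tau[current][j] ** alpha * eta[current][j] ** beta
               for j in unvisited}
    if rng.random() < q0:                 # exploitation: best edge
        return max(weights, key=weights.get)
    total = sum(weights.values())         # biased exploration: roulette wheel
    r, acc = rng.uniform(0, total), 0.0
    for j, w in weights.items():
        acc += w
        if acc >= r:
            return j
    return j
```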
H.3.2.15. Transportation
S. Mostafaei; H. Ganjavi; R. Ghodsi
Abstract
In this paper, the relation among factors in the road transportation sector from March 2005 to March 2011 is analyzed. Most previous studies take an economic point of view on gasoline consumption. Here, a new approach is proposed in which different data mining techniques are used to extract meaningful relations among the aforementioned factors. The main, dependent factor is gasoline consumption. First, the data gathered from different organizations are analyzed by a feature selection algorithm to investigate how many of the independent factors have an influential effect on the dependent factor. A few of these factors were determined to be unimportant and were removed from the analysis. Two association rule mining algorithms, Apriori and Carma, are used to analyze the data. Because the data are continuous, they cannot be handled directly by these two algorithms; therefore, the two-step clustering algorithm is used to discretize them. Association rule mining shows that fewer vehicles, gasoline rationing, and a high number of taxi trips are the main factors behind low gasoline consumption. Carma results show that the number of taxi trips increased after gasoline rationing. The results also show that Carma can reach all the rules found by the Apriori algorithm. Finally, the association rule mining results are more informative than statistical correlation analysis.
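The discretize-then-mine pipeline can be sketched as follows. Equal-width binning stands in for the two-step clustering discretization (an assumption made here for brevity), and one Apriori-style counting pass finds frequent item pairs.

```python
from itertools import combinations
from collections import Counter

def discretize(values, labels=("low", "mid", "high")):
    """Equal-width binning: a stand-in for the two-step clustering step."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / len(labels) or 1.0
    return [labels[min(int((v - lo) / width), len(labels) - 1)]
            for v in values]

def frequent_pairs(transactions, min_support):
    """Support counting for item pairs: the heart of one Apriori pass."""
    counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            counts[pair] += 1
    n = len(transactions)
    return {p: c / n for p, c in counts.items() if c / n >= min_support}
```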
G.3.5. Systems
A. Moshar Movahhed; H. Toossian Shandiz; Syed K. Hoseini Sani
Abstract
In this paper, a fractional-order averaged model for a DC/DC buck converter operating in continuous conduction mode (CCM) is established. The DC/DC buck converter is one of the main components of the wind turbine system used in this research. Due to practical restrictions, the input voltage and duty cycle of the converter were not available; therefore, the whole wind system was simulated in Matlab/Simulink, and the gathered data were used in the proposed trial-and-error method to find the fractional order of the converter. There is an obvious relationship between controller performance and the mathematical model: a more accurate model leads to a better controller.
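For context, one common form of a fractional-order averaged buck model in CCM replaces the time derivatives of the classical averaged model with Caputo fractional derivatives. This is a standard textbook form, not necessarily the exact model identified in the paper:

```latex
% Averaged CCM buck model with Caputo derivatives of orders \alpha, \beta;
% the classical averaged model is recovered at \alpha = \beta = 1.
L \, {}^{C}\!D_t^{\alpha} \, i_L(t) = d(t)\, V_{\mathrm{in}}(t) - v_o(t),
\qquad
C \, {}^{C}\!D_t^{\beta} \, v_o(t) = i_L(t) - \frac{v_o(t)}{R},
\qquad 0 < \alpha, \beta \le 1
```

Here $i_L$ is the inductor current, $v_o$ the output voltage, $d$ the duty cycle, and $\alpha$, $\beta$ are the orders found by trial and error against the Simulink data.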
Document and Text Processing
A. Pouramini; S. Khaje Hassani; Sh. Nasiri
Abstract
In this paper, we present an approach and a visual tool, called HWrap (Handle-Based Wrapper), for creating web wrappers to extract data records from web pages. In our approach, we rely mainly on the visible page content to identify data regions on a web page. Our extraction algorithm is inspired by the way a human user scans the page content for specific data. In particular, we use text features such as textual delimiters, keywords, constants, or text patterns, which we call handles, to construct patterns for the target data regions and data records. We offer a polynomial-time algorithm in which these patterns are checked against the page elements in a mixed bottom-up and top-down traversal of the DOM tree. The extracted data is directly mapped onto a hierarchical XML structure, which forms the output of the wrapper. The wrappers generated by this method are robust and independent of the HTML structure; therefore, they can be adapted to similar websites to gather and integrate information.
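The handle idea, in its simplest form, is extraction between textual delimiters rather than by HTML structure. The sketch below is a much-reduced illustration of that principle (plain text instead of a DOM traversal):

```python
import re

def extract_between(text, start_handle, end_handle):
    """Return the text fragments delimited by two textual handles."""
    pattern = re.escape(start_handle) + r"(.*?)" + re.escape(end_handle)
    return [m.strip() for m in re.findall(pattern, text, re.S)]
```

Because the handles are visible text, the same pattern keeps working when the surrounding HTML markup changes.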
Mohammad Shahidol Islam
Abstract
Many researchers adopt the Local Binary Pattern for pattern analysis. However, the long histogram created by the Local Binary Pattern is not suitable for large-scale facial databases. This paper presents a simple facial pattern descriptor for facial expression recognition. A local pattern is computed based on the local gradient flow from one side to the other through the center pixel of a 3x3-pixel region. The center pixel of that region is represented by two separate two-bit binary patterns, named the Local Gradient Pattern (LGP) for that pixel. An LGP pattern is extracted for each pixel. The facial image is divided into 81 equally sized blocks, and the histograms of the local LGP features of all 81 blocks are concatenated to build the feature vector. Experimental results show that the proposed technique, along with a Support Vector Machine, is effective for facial expression recognition.
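The two two-bit codes per center pixel could be formed as below. This is only one plausible reading of "gradient flow from one side to the other through the center", comparing the gradient magnitudes on either side of the center along each of the four lines through it; the paper's exact bit definition may differ.

```python
def lgp(block):
    """Two 2-bit codes for the centre of a 3x3 block (illustrative reading).
    Each bit says whether the gradient entering the centre is at least as
    strong as the gradient leaving it, along one line through the centre."""
    c = block[1][1]
    horiz = abs(block[1][0] - c) - abs(c - block[1][2])
    vert  = abs(block[0][1] - c) - abs(c - block[2][1])
    diag1 = abs(block[0][0] - c) - abs(c - block[2][2])
    diag2 = abs(block[0][2] - c) - abs(c - block[2][0])
    code_a = (1 if horiz >= 0 else 0, 1 if vert >= 0 else 0)
    code_b = (1 if diag1 >= 0 else 0, 1 if diag2 >= 0 else 0)
    return code_a, code_b
```

With 2-bit codes, each block contributes a short histogram, which is what keeps the concatenated 81-block feature vector compact compared to LBP's 256-bin histograms.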
H.3.13. Intelligent Web Services and Semantic Web
E. Shahsavari; S. Emadi
Abstract
Service-oriented architecture facilitates run-time interactions by using business integration over networks. Currently, web services are considered the best option for providing Internet services. Due to the increasing number of web users and the complexity of their queries, simple, atomic services are not able to meet users' needs; providing complex services requires service composition. Web service composition, as an effective approach to integrating the plans of business institutions, has gained significant momentum. Nowadays, web services are created and updated constantly; therefore, in the real world, many services may not be composable under the conditions and constraints of the user's preferred choice. In the proposed method for automatic service composition, the main requirements of the user, including the available inputs, expected outputs, quality of service, and priority, are explicitly specified by the user at the outset, and service composition is performed with this information. Because of the large number of services with the same functionality, the candidate services are first reduced by a quality-of-service-based Skyline method; then, using an algorithm based on graph search, all possible solutions are produced. Finally, the user's semantic constraints are applied to the service compositions, and the best composition is offered according to the user's requests. The results of this study show that the proposed method is more scalable and efficient, and that it offers a better solution by considering the user's semantic constraints.
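The QoS-based Skyline step keeps only the services not dominated by any other candidate. A minimal sketch, assuming each service is a tuple of QoS attributes where larger values are better:

```python
def dominates(a, b):
    """a dominates b: at least as good in every QoS attribute (larger is
    better here) and strictly better in at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def skyline(services):
    """Keep only the services not dominated by any other candidate."""
    return [s for s in services
            if not any(dominates(t, s) for t in services if t is not s)]
```

Pruning dominated services before the graph search shrinks the composition space without discarding any potentially optimal composition.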
H.3.8. Natural Language Processing
S. Lazemi; H. Ebrahimpour-komleh
Abstract
The dependency parser is one of the most important fundamental tools in natural language processing; it extracts the structure of sentences and determines the relations between words based on a dependency grammar. Dependency parsing is well suited to free-word-order languages such as Persian. In this paper, a data-driven dependency parser has been developed, with the help of a phrase-structure parser, for Persian. The feature space defined in a parser is one of the important factors in its success. Our goal is to generate and extract features appropriate to dependency parsing of Persian sentences. To achieve this goal, new semantic and syntactic features have been defined and added to the MSTParser by the stacking method. The semantic features are obtained using word clustering algorithms based on syntagmatic analysis, and the syntactic features are obtained using the Persian phrase-structure parser; both are used as bit strings. Experiments have been performed on the Persian Dependency Treebank (PerDT) and the Uppsala Persian Dependency Treebank (UPDT). The results indicate that the new features improve the performance of the dependency parser for Persian. The achieved unlabeled attachment scores for PerDT and UPDT are 89.17% and 88.96%, respectively.
H.3. Artificial Intelligence
M. Moradi Zirkohi
Abstract
In this paper, a high-performance optimal fractional emotional intelligent controller for an Automatic Voltage Regulator (AVR) in a power system, tuned using the Cuckoo Optimization Algorithm (COA), is proposed. The AVR is the main controller within the excitation system that holds the terminal voltage of a synchronous generator at a specified level. The proposed control strategy is based on brain emotional learning: a self-tuning controller, the so-called Brain Emotional Learning Based Intelligent Controller (BELBIC), driven by sensory inputs and emotional cues. The major contribution of the paper is that, to exploit the merits of fractional-order PID (FOPID) controllers, a FOPID controller is employed to formulate the stimulant input (SI) signal. This is a distinct advantage over published work, in which a PID controller is used to generate the SI. Another remarkable feature of the proposed approach is that it is a model-free controller. The proposed control strategy is promising in terms of simplicity of design, ease of implementation, and reduced design effort. In addition, to enhance the performance of the proposed controller, its parameters are tuned by the COA: to design the BELBIC controller for the AVR system, a multi-objective optimization problem including the overshoot, settling time, rise time, and steady-state error is formulated. Simulation studies confirm that, compared to the classical PID and FOPID controllers introduced in the literature, the proposed controller shows superior performance under model uncertainties. With the proposed controller, the rise time and settling time are improved by 47% and 57%, respectively.
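The FOPID-shaped stimulant input has the standard fractional PID form, sketched below; the exact operator definition and order ranges used in the paper are assumptions here, based on the usual FOPID formulation:

```latex
% Stimulant input formed by a FOPID law acting on the voltage error e(t):
% a fractional integral of order \lambda and a fractional derivative of
% order \mu generalize the integer-order PID terms.
SI(t) = K_p \, e(t) + K_i \, D_t^{-\lambda} e(t) + K_d \, D_t^{\mu} e(t),
\qquad 0 < \lambda, \mu < 2
```

The five parameters $(K_p, K_i, K_d, \lambda, \mu)$, together with the BELBIC gains, form the search space that the COA tunes against the overshoot/settling-time/rise-time/steady-state-error objective.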
Syed Abbas Taher; Mojtaba Pakdel
Abstract
For multi-objective optimal reactive power dispatch (MORPD), a new approach is proposed that simultaneously minimizes the active power transmission loss, the bus voltage deviation, and the voltage stability index of a power system. Optimal settings of continuous and discrete control variables (e.g., generator voltages, tap positions of tap-changing transformers, and the number of shunt reactive compensation devices to be switched) are determined. MORPD is solved using particle swarm optimization (PSO). In addition, Pareto-optimality PSO (POPSO) is proposed to improve the performance of the multi-objective optimization task, which is defined with competing and non-commensurable objectives. The decision maker is provided with a representative Pareto-optimal set obtained by applying a hierarchical clustering algorithm. The proposed approach was tested using the IEEE 30-bus and IEEE 118-bus test systems. When the simulation results are compared with several commonly used algorithms, they indicate better performance and good potential for efficient application to MORPD problems.
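Maintaining a Pareto-optimal set during the POPSO search reduces to a dominance test and an archive update; a minimal sketch for minimization objectives (loss, voltage deviation, stability index):

```python
def dominates_min(a, b):
    """For minimisation: a dominates b if it is no worse in every objective
    and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def update_archive(archive, candidate):
    """Insert a candidate objective vector into a Pareto archive:
    reject it if dominated, otherwise evict everything it dominates."""
    if any(dominates_min(a, candidate) for a in archive):
        return archive
    return [a for a in archive if not dominates_min(candidate, a)] + [candidate]
```

The hierarchical clustering mentioned in the abstract would then be run over this archive to hand the decision maker a small, representative subset.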
H.6. Pattern Recognition
S. Ahmadkhani; P. Adibi; A. ahmadkhani
Abstract
In this paper, several two-dimensional extensions of principal component analysis (PCA) and linear discriminant analysis (LDA) have been applied in a lossless dimensionality-reduction framework for face recognition. In this framework, the benefits of dimensionality reduction are used to improve the performance of the predictive model, a support vector machine (SVM) classifier, while the loss of useful information is minimized using the projection-penalty idea. Well-known face databases were used to train and evaluate the proposed methods. The experimental results indicate that the proposed methods generally achieve higher average classification accuracy than classification based on Euclidean distance, and also than methods that first extract features using dimensionality-reduction techniques and then use an SVM classifier as the predictive model.
Ali Harimi; Ali Shahzadi; Alireza Ahmadyfard; Khashayar Yaghmaie
Abstract
Speech Emotion Recognition (SER) is a new and challenging research area with a wide range of applications in man-machine interaction. The aim of an SER system is to recognize human emotion by analyzing the acoustics of speech. In this study, we propose Spectral Pattern features (SPs) and Harmonic Energy features (HEs) for emotion recognition. These features are extracted from the spectrogram of the speech signal using image processing techniques. For this purpose, details in the spectrogram image are first highlighted using a histogram-equalization technique. Then, directional filters are applied to decompose the image into six directional components. Finally, a binary-masking approach is employed to extract the SPs from the sub-banded images. The proposed HEs are extracted by applying band-pass filters to the spectrogram image. The dimensionality of the extracted features is reduced using a filter-type feature selection algorithm based on the Fisher discriminant ratio. The classification accuracy of the proposed SER system has been evaluated using 10-fold cross-validation on the Berlin database. Average recognition rates of 88.37% and 85.04% were achieved for female and male speakers, respectively. Considering the total numbers of male and female samples, an overall recognition rate of 86.91% was obtained.
Milad Azarbad; Hamed Azami; Saeid Sanei; A Ebrahimzadeh
Abstract
The record of human brain neural activity, the electroencephalogram (EEG), is generally known to be a non-stationary and nonlinear signal. In many applications, it is useful to divide an EEG into segments within which the signal can be considered stationary. The combination of empirical mode decomposition (EMD) and the Hilbert transform, called the Hilbert-Huang transform (HHT), is a new and powerful tool in signal processing. Unlike traditional time-frequency approaches, the HHT accommodates the nonlinearity of the medium and the non-stationarity of EEG signals. In addition, we use singular spectrum analysis (SSA) in the pre-processing step as an effective noise-removal approach. Using synthetic and real EEG signals, the proposed method is compared with the wavelet generalized likelihood ratio (WGLR), a well-known signal segmentation method. The simulation results indicate the superior performance of the proposed method.
Mohaddeseh Dashti; Vali Derhami; Esfandiar Ekhtiyari
Abstract
Yarn tenacity is one of the most important properties in yarn production. This paper addresses modeling of yarn tenacity as well as optimal determination of the effective input values needed to produce yarn with a desired tenacity. An artificial neural network is used as a suitable structure for tenacity modeling of 30 Ne cotton yarn. As the first step in modeling, empirical data were collected for cotton yarns. Then, the structure of the neural network was determined, and its parameters were adjusted by the back-propagation method. The efficiency and accuracy of the neural model were measured by the percentage error as well as the coefficient of determination. The experimental results show that the neural model can predict the tenacity with less than 3.5% error. Afterwards, using genetic algorithms, a new method is proposed for optimal determination of the input values in yarn production to reach a desired tenacity. We conducted several experiments over different ranges with various production cost functions. The proposed approach finds the best input values to reach the desired tenacity while taking production costs into account.
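Searching the input space for a target model output can be sketched with a small real-coded GA. Everything below the fence line is a stand-in: the linear `10 + 2*x0 - x1` model in the test merely substitutes for the trained neural network, and the GA operators (tournament-like truncation selection, blend crossover, Gaussian mutation) are generic choices, not the paper's.

```python
import random

def ga_optimize(fitness, bounds, pop_size=30, generations=60, seed=1):
    """Minimal real-coded GA minimising `fitness` over box `bounds`."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        new = scored[:2]                                  # elitism
        while len(new) < pop_size:
            p1, p2 = rng.sample(scored[:10], 2)           # truncation selection
            child = [(a + b) / 2 for a, b in zip(p1, p2)]  # blend crossover
            i = rng.randrange(dim)                         # mutate one gene
            lo, hi = bounds[i]
            child[i] = min(max(child[i] + rng.gauss(0, 0.1 * (hi - lo)), lo), hi)
            new.append(child)
        pop = new
    return min(pop, key=fitness)
```

In the paper's setting, `fitness` would combine the distance of the network's predicted tenacity from the target with the production cost of the inputs.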
Marziea Rahimi; Morteza Zahedi
Abstract
Web search engines are among the most popular tools on the Internet, widely used by both expert and novice users. Constructing an adequate query that best represents a user's information need to the search engine is an important concern of web users. Query expansion is a way to reduce this concern and increase user satisfaction. In this paper, a new method of query expansion is introduced. The method, a combination of relevance feedback and latent semantic analysis, finds terms related to the topics of the user's original query, based on the relevant documents selected by the user in the relevance-feedback step. The method is evaluated and compared with Rocchio relevance feedback. The results of this evaluation indicate that the method better represents the user's information need and significantly increases user satisfaction.
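The Rocchio baseline used for comparison can be sketched over term-weight dictionaries: the query vector is moved toward the centroid of the user-selected relevant documents and away from the non-relevant ones. The α/β/γ defaults below are the conventional textbook values, not the paper's settings.

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio relevance-feedback update on sparse term-weight dicts."""
    terms = (set(query)
             | {t for d in relevant for t in d}
             | {t for d in nonrelevant for t in d})
    new = {}
    for t in terms:
        w = alpha * query.get(t, 0.0)
        if relevant:
            w += beta * sum(d.get(t, 0.0) for d in relevant) / len(relevant)
        if nonrelevant:
            w -= gamma * sum(d.get(t, 0.0) for d in nonrelevant) / len(nonrelevant)
        new[t] = max(w, 0.0)        # negative weights are clipped, as usual
    return new
```

Terms absent from the original query but present in the relevant documents (like "search" below) enter the expanded query with positive weight.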
Seyed Mojtaba Hosseinirad; S.K. Basu
Abstract
In this paper, we study WSN design as a multi-objective optimization problem using the GA technique. We study the effects of GA parameters, including the population size, the selection and crossover methods, and the mutation probability, on the design. Choosing suitable parameters is a trade-off between different network criteria and characteristics. The type of deployment, the effect of network size, the radio communication radius, the density of sensors in an application area, and the location of the base station are the WSN characteristics investigated here. The simulation results of this study indicate that the radio communication radius has a direct effect on radio interference, cluster overlapping, sensor-node distribution uniformity, and communication energy. The optimal radio communication radius depends not on the network size or the type of deployment, but on the density of the network deployment. The location of the base station affects the radio communication energy, cluster overlapping, and the average number of communications per cluster head. A base station located outside the application domain is preferred over one located inside. In all the network situations, random deployment performs better than grid deployment.
M. M. Fateh; Seyed M. Ahmadi; S. Khorashadizadeh
Abstract
Uncertainty estimation and compensation are challenging problems in the robust control of robot manipulators, which are complex systems. This paper presents a novel decentralized model-free robust controller for electrically driven robot manipulators. As a novelty, the proposed controller employs a simple Gaussian radial-basis-function (RBF) network as an uncertainty estimator. The proposed network includes a hidden layer with one node, two inputs, and a single output. In comparison with other model-free estimators, such as multilayer neural networks and fuzzy systems, the proposed estimator is simpler, less computationally demanding, and more effective. The weights of the RBF network are tuned online using an adaptation law derived from a stability analysis. Unlike the majority of previous approaches, which are torque-based, the proposed control design is voltage-based. Simulations and comparisons with a robust neural network control approach show the efficiency of the proposed approach applied to an articulated robot manipulator driven by permanent-magnet DC motors.
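A single-node Gaussian RBF estimator of this kind is small enough to write out in full. The gradient-style weight update below is a generic stand-in for the paper's stability-derived adaptation law; the center, width, and gain values are illustrative.

```python
import math

class RBFEstimator:
    """Two-input, one-node, one-output Gaussian RBF uncertainty estimator."""
    def __init__(self, center, width, gain=0.5):
        self.center, self.width, self.gain = center, width, gain
        self.w = 0.0                      # single output weight, tuned online

    def phi(self, x):
        """Gaussian activation of the lone hidden node."""
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, self.center))
        return math.exp(-d2 / (2 * self.width ** 2))

    def output(self, x):
        """Estimated lumped uncertainty."""
        return self.w * self.phi(x)

    def adapt(self, x, error, dt=0.001):
        # w_dot = gain * phi(x) * error  (sketch of an adaptation law)
        self.w += self.gain * self.phi(x) * error * dt
```

Because only one scalar weight is updated per joint, the per-step cost is trivial compared with a multilayer network, which is the simplicity argument the abstract makes.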
H.5. Image Processing and Computer Vision
A.M. Shafiee; A. M. Latif
Abstract
Fuzzy segmentation is an effective way of segmenting objects in images containing both random noise and varying illumination. In this paper, a modified method based on Comprehensive Learning Particle Swarm Optimization (CLPSO) is proposed for pixel classification in the HSI color space by selecting a fuzzy classification system with a minimum number of fuzzy rules and a minimum number of incorrectly classified patterns. In the CLPSO-based method, each individual of the population automatically generates a fuzzy classification system; each population member then tries to maximize a fitness criterion combining a high classification rate with a small number of fuzzy rules. To reduce the multidimensional search space for an M-class classification problem, the centroid of each class is calculated and then fixed in the membership functions of the fuzzy system. The performance of the proposed method is evaluated on classification within the RoboCup soccer environment, with spatially varying illumination intensities on the scene. The results show 85.8% classification accuracy.
H.3.10. Robotics
M. M. Fateh; M. Baluchzadeh
Abstract
This paper proposes a discrete-time repetitive optimal control of electrically driven robotic manipulators using an uncertainty estimator. The proposed control method can be used for performing repetitive motions, which cover many industrial applications of robotic manipulators. This control law belongs to the class of torque-based control, in which the joint torques are generated by permanent magnet DC motors operated in current mode. The motor current is regulated using a proportional-integral controller. The novelty of this paper is a modification in applying discrete-time linear quadratic control to the robot manipulator, which is a nonlinear uncertain system. For this purpose, a novel discrete linear time-variant model is introduced for the robotic system. Then, a time-delay uncertainty estimator is added to the discrete-time linear quadratic control to compensate for the nonlinearity and uncertainty associated with the model. The proposed control approach is verified by stability analysis. Simulation results show the superiority of the proposed discrete-time repetitive optimal control over the discrete-time linear quadratic control.
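A minimal sketch of the two ingredients named above: a discrete-time linear quadratic regulator computed by backward Riccati iteration on an assumed linear joint model, plus a one-step time-delay estimate that infers the previous step's disturbance from the model residual. The matrices, weights, and disturbance here are illustrative placeholders, not the paper's time-variant model or gains.

```python
import numpy as np

# Assumed discrete linear joint model (double-integrator-like, dt = 0.1).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[0.01]])

# Backward Riccati iteration to a (near) steady-state LQR gain K.
P = Q.copy()
for _ in range(200):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

x = np.array([[1.0], [0.0]])
d_hat = 0.0                          # time-delay uncertainty estimate
for _ in range(100):
    u = (-K @ x).item() - d_hat      # LQR action minus estimated uncertainty
    d_true = 0.5                     # constant unmodeled disturbance
    x_next = A @ x + B * (u + d_true)
    # Time-delay estimation: recover last step's disturbance from the
    # residual between the measured and the nominal model prediction.
    d_hat = (np.linalg.pinv(B) @ (x_next - A @ x - B * u)).item()
    x = x_next
```

Because the disturbance enters through the same channel as the input, the residual reveals it exactly one step late, which is the essence of time-delay estimation.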
F.2.7. Optimization
M. Mohammadpour; H. Parvin; M. Sina
Abstract
Many of the problems considered in optimization and learning assume that solutions exist in a dynamic environment. Hence, algorithms are required that dynamically adapt to the problem's changing conditions and search the new conditions. In most cases, utilizing information from the past allows the algorithm to adapt quickly after a change. This is the idea underlying the use of memory in this field, which involves key design issues concerning the memory content and the processes of update and retrieval. In this article, we use a chaotic genetic algorithm (GA) with memory for solving dynamic optimization problems. A chaotic system predicts the future much more accurately than a random system. The proposed method uses a new memory with diversity maximization, and we propose a new strategy for memory update and retrieval. An experimental study is conducted on the Moving Peaks Benchmark to test the performance of the proposed method in comparison with several state-of-the-art algorithms from the literature. The experimental results show the superiority and greater effectiveness of the proposed algorithm in dynamic environments.
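Two of the building blocks mentioned above can be sketched in a few lines: a chaotic sequence (here the logistic map, a common choice for chaos-driven GAs, assumed rather than taken from the paper) that can replace uniform random draws in mutation and crossover decisions, and a hypothetical diversity-preserving memory update; the paper's actual update/retrieval strategy is not reproduced.

```python
def logistic_map(x0=0.7, r=4.0):
    """Chaotic logistic-map sequence; with r = 4 it stays in [0, 1]."""
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

def update_memory(memory, candidate, capacity=5):
    """Hypothetical diversity-preserving update: when the memory is full,
    replace the stored solution nearest to the candidate, so the stored
    solutions stay spread out over the search space."""
    if len(memory) < capacity:
        return memory + [candidate]
    nearest = min(range(len(memory)), key=lambda i: abs(memory[i] - candidate))
    updated = list(memory)
    updated[nearest] = candidate
    return updated

chaos = logistic_map()
values = [next(chaos) for _ in range(1000)]  # chaotic stand-in for uniform draws
```

The appeal of the chaotic driver is that it is deterministic yet covers the unit interval densely, so runs are reproducible while still exploring broadly.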
F.2.7. Optimization
E. Khodayari; V. Sattari-Naeini; M. Mirhosseini
Abstract
Developing an optimal flocking control procedure is an essential problem in mobile sensor networks (MSNs). Furthermore, finding the parameters such that the sensors can reach the target in an appropriate time is an important issue. This paper offers an optimization approach based on metaheuristic methods for flocking control in MSNs to follow a target. We develop a non-differentiable optimization technique based on the gravitational search algorithm (GSA). Finding the flocking parameters using swarm behaviors is the main contribution of this paper, with the aim of minimizing a cost function. The cost function is the average Euclidean distance of the flock's center of mass (COM) from the moving target. One of the benefits of using GSA is its applicability to tracking multiple targets with satisfactory results. Simulation results indicate that this scheme outperforms existing ones and demonstrate the ability of this approach in comparison with previous methods.
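The cost function described above, the trajectory-averaged Euclidean distance between the flock's center of mass and the moving target, can be sketched directly; the toy sensor trajectory below is an assumption for illustration, not the paper's simulation setup.

```python
import numpy as np

def flocking_cost(positions_over_time, target_over_time):
    """Average Euclidean distance between the flock's center of mass (COM)
    and the moving target, taken over the whole trajectory."""
    dists = []
    for sensors, target in zip(positions_over_time, target_over_time):
        com = np.mean(sensors, axis=0)          # COM of the sensor positions
        dists.append(np.linalg.norm(com - target))
    return float(np.mean(dists))

# Toy trajectory: four sensors placed symmetrically around a target that
# moves along the x-axis, so the COM tracks the target exactly.
traj = [np.array([[t - 1, -1.0], [t + 1, -1.0], [t - 1, 1.0], [t + 1, 1.0]])
        for t in range(5)]
targets = [np.array([float(t), 0.0]) for t in range(5)]
print(flocking_cost(traj, targets))  # prints 0.0
```

Since this cost is non-differentiable in the flocking parameters, a derivative-free metaheuristic such as GSA is a natural fit for minimizing it.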
M. Ilbeygi; M.R. Kangavari
Abstract
The increasing use of unmanned aerial vehicles (UAVs), or drones, in different civil and military operations has attracted the attention of many researchers and scientific communities. One of the most notable challenges in this field is supervising and controlling a group or a team of UAVs by a single user. We therefore propose a new intelligent adaptive interface (IAI) to overcome this challenge. Our IAI is not only built on a comprehensive IAI architecture but also offers notable features: a single-display user interface for controlling the UAV team, a user cognitive model that delivers the right information at the right time, support for the user through explanations of system behavior, and guidance that helps the user make the right decisions. Finally, we evaluated the IAI with eleven volunteers in three different scenarios. The results show the power of the proposed IAI in reducing workload, increasing the user's situation awareness, and consequently improving the mission completion rate.