Original/Review Paper
H.6.5.2. Computer vision
M. Karami; A. Moosavie nia; M. Ehsanian
Abstract
In this paper, we address the problem of automatic arrangement of cameras in a 3D system to enhance the performance of the depth acquisition procedure. Lacking ground truth or a priori information, a measure of uncertainty is required to assess the quality of reconstruction. The mathematical model of iso-disparity surfaces provides an efficient way to estimate the depth estimation uncertainty, which is believed to be related to the baseline length, focal length, panning angle, and pixel resolution in a stereo vision system. Accordingly, we first present analytical relations for fast estimation of the uncertainty embedded in depth acquisition; these relations, along with the 3D sampling arrangement, are then employed to define a cost function. The optimal camera arrangement is determined by minimizing this cost function with respect to the system parameters and the required constraints. Finally, the proposed algorithm is applied to several 3D models. The simulation results demonstrate a significant improvement (up to 35%) in the depth uncertainty of the obtained depth maps compared with the traditional rectified camera setup.
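As a rough illustration of the kind of uncertainty relation involved, the sketch below uses the standard first-order stereo error model δZ ≈ Z²·δd/(f·B) together with a brute-force search over candidate baselines. The scene depths, focal length, and search range are hypothetical, and the paper's actual analytical relations also account for the panning angle.

```python
import numpy as np

def depth_uncertainty(Z, f_px, baseline, disparity_step=1.0):
    """First-order depth error from disparity quantization: dZ ~ Z^2 * dd / (f * B).

    Z              : depth(s) in metres
    f_px           : focal length in pixels
    baseline       : baseline length in metres
    disparity_step : disparity quantization in pixels (pixel resolution)
    """
    return (Z ** 2) * disparity_step / (f_px * baseline)

# Toy search: pick the baseline (within assumed mechanical limits) that
# minimizes the mean uncertainty over sampled scene depths, a stand-in for
# the paper's cost function over all camera parameters.
depths = np.linspace(2.0, 10.0, 50)       # hypothetical scene depths (m)
candidates = np.linspace(0.05, 0.5, 46)   # candidate baselines (m)
costs = [depth_uncertainty(depths, f_px=800, baseline=b).mean() for b in candidates]
best = candidates[int(np.argmin(costs))]
print(f"best baseline ~ {best:.2f} m")
```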
Original/Review Paper
F.4. Probability and Statistics
Z. Shaeiri; M. R. Karami; A. Aghagolzadeh
Abstract
The sufficient number of linear and noisy measurements for exact and approximate sparsity pattern/support set recovery in the high-dimensional setting is derived. Although this problem has been addressed in the recent literature, there are still considerable gaps between those results and the exact limits of perfect support set recovery. To reduce this gap, in this paper the sufficient condition is enhanced. A specific form of a joint typicality decoder is used for the support recovery task. Two performance metrics are considered for recovery validation: one that considers exact support recovery, and another that seeks partial support recovery. First, an upper bound is obtained on the error probability of sparsity pattern recovery. Next, using this upper bound, the sufficient number of measurements for reliable support recovery is derived. It is shown that the sufficient condition for reliable support recovery depends on three key parameters of the problem: the noise variance, the minimum nonzero entry of the unknown sparse vector, and the sparsity level. Simulations are performed for different sparsity rates, noise variances, and distortion levels. The results show that in all the mentioned cases the proposed methodology significantly increases the convergence rate of the upper bound on the error probability of support recovery, which leads to a lower error probability bound compared with previously proposed bounds.
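The paper's decoder is a combinatorial joint typicality test, which is impractical to run directly; the Monte Carlo sketch below substitutes orthogonal matching pursuit as the recovery engine to show how the exact-support recovery rate varies with the number of measurements m, the noise variance, and the minimum nonzero entry. All problem sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(A, y, k):
    """Orthogonal matching pursuit: a practical proxy for the paper's
    combinatorial joint-typicality decoder."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    return set(support)

n, k, x_min, sigma, trials = 256, 8, 1.0, 0.1, 200
for m in (40, 80, 120):
    hits = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        true_sup = rng.choice(n, k, replace=False)
        x = np.zeros(n)
        x[true_sup] = x_min * rng.choice([-1, 1], k)   # minimum-magnitude entries
        y = A @ x + sigma * rng.standard_normal(m)     # noisy linear measurements
        hits += omp(A, y, k) == set(true_sup)
    print(f"m={m}: exact-support recovery rate {hits / trials:.2f}")
```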
Original/Review Paper
D. Data
M. Hajizadeh-Tahan; M. Ghasemzadeh
Abstract
Learning models and the results derived from them depend on the quality of the input data. If raw data is not properly cleaned and structured, the results tend to be incorrect. Therefore, discretization, as one of the preprocessing techniques, plays an important role in learning processes. The most important challenge in the discretization process is to reduce the number of feature values. This operation should be applied in a way that maintains the relationships between the features and increases the accuracy of classification algorithms. In this paper, a new evolutionary multi-objective algorithm is presented. The proposed algorithm uses three objective functions to achieve high-quality discretization. The first and second objectives minimize the number of selected cut points and the classification error, respectively. The third objective introduces a new criterion, called the normalized cut, which uses the relationships between feature values to maintain the nature of the data. The performance of the proposed algorithm was tested on 20 benchmark datasets. According to the comparisons and the results of nonparametric statistical tests, the proposed algorithm performs better than the other existing major methods.
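A minimal sketch of how the first two objectives could be evaluated for a candidate cut-point set, assuming a majority-class-per-bin classifier as the error proxy. The normalized-cut objective and the evolutionary search itself are not reproduced here, and the toy feature and labels are assumptions.

```python
import numpy as np

def discretize(x, cuts):
    """Map a continuous feature to interval indices given cut points."""
    return np.digitize(x, np.sort(cuts))

def objectives(x, y, cuts):
    """Two of the three objectives: (#cut points, training error of a
    majority-class-per-bin classifier). The paper's third objective,
    the normalized-cut criterion, is omitted in this sketch."""
    bins = discretize(x, cuts)
    err = 0
    for b in np.unique(bins):
        labels = y[bins == b]
        err += labels.size - np.bincount(labels).max()  # misclassified in bin
    return len(cuts), err / y.size

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 300)
y = (x > 4.7).astype(int)                      # toy ground truth
print(objectives(x, y, cuts=[2.0, 4.5, 8.0]))  # -> (3, small error)
```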
Original/Review Paper
H.6.5.13. Signal processing
F. Sabahi
Abstract
Frequency control is one of the key aspects of managing the performance of a microgrid (MG) system. Theoretically, model-based controllers may be the ideal control mechanisms; however, they are highly sensitive to model uncertainties and have difficulty preserving robustness. The presence of serious disturbances, the increasing number of MGs, varying voltage supplies of MGs, and both the independent operation of MGs and their interaction with the main grid make the design of model-based frequency controllers for MGs inherently challenging and problematic. This paper proposes an approach that takes advantage of interval Type-II fuzzy logic for modeling an MG system in the process of its robust H∞ frequency control. Specifically, the main contribution of this paper is that the parameters of the MG system are modeled by an interval Type-II fuzzy system (IT2FS), while the MG simultaneously deals with perturbation using the H∞ index to control its frequency. The performance of the microgrid equipped with the proposed modeling and controller is then compared with that of other controllers, such as H2 and μ-synthesis, during changes in the microgrid parameters and under perturbations. The comparison shows the superiority and effectiveness of the proposed approach in terms of robustness against uncertainties in the modeling parameters and perturbations.
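To make the modeling idea concrete, here is a toy interval Type-II inference step, assuming Gaussian membership functions with an uncertain mean. A faithful IT2FS would apply Karnik-Mendel type reduction rather than the simple interval midpoint used here, and the H∞ synthesis is outside the scope of this sketch; all rule parameters are hypothetical.

```python
import numpy as np

def it2_gaussian(x, m_lo, m_hi, sigma):
    """Interval Type-II Gaussian MF with an uncertain mean in [m_lo, m_hi]:
    returns (lower, upper) membership grades, the footprint of uncertainty."""
    g = lambda m: np.exp(-0.5 * ((x - m) / sigma) ** 2)
    upper = 1.0 if m_lo <= x <= m_hi else max(g(m_lo), g(m_hi))
    lower = min(g(m_lo), g(m_hi))
    return lower, upper

# Toy two-rule TSK inference; rules are ((m_lo, m_hi, sigma), consequent).
x = 0.7
rules = [((0.0, 0.2, 0.3), 1.0), ((0.6, 0.8, 0.3), -1.0)]
lo = np.array([it2_gaussian(x, *p)[0] for p, _ in rules])
hi = np.array([it2_gaussian(x, *p)[1] for p, _ in rules])
c = np.array([cons for _, cons in rules])
y_lo, y_hi = (lo @ c) / lo.sum(), (hi @ c) / hi.sum()   # endpoint outputs
print("output interval midpoint:", 0.5 * (y_lo + y_hi))
```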
Original/Review Paper
H.6.2.2. Fuzzy set
N. Mohammadkarimi; V. Derhami
Abstract
This paper proposes fuzzy modeling from observed data. A fuzzy system is known as a knowledge-based or rule-based system, and its most important part is the rule base. One problem in generating fuzzy rules from training data is inconsistent data: the existence of inconsistent and uncertain states in the training data causes high modeling error. Here, a probabilistic fuzzy system is presented to address this challenge. A zero-order Sugeno fuzzy model is used as the fuzzy system structure. First, clustering is used to obtain the number of rules and the input membership functions. A set of candidate values for the consequent parts of the fuzzy rules is then considered. For each pair of training data, according to which rules fire and what the observed output is, the probabilities of the consequent candidates are updated. In the next step, the eligibility probability of each consequent candidate is determined for every rule. Finally, using these probabilities, two probable outputs are generated for each input. The experimental results show the superiority of the proposed approach over several well-known approaches, while reducing the number of rules and the system complexity.
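A minimal sketch of the probabilistic-consequent mechanism as described: training samples vote, weighted by rule firing strength, for candidate consequent values, and two probable outputs are read off per input. The membership parameters, the candidate grid, and the deliberately inconsistent toy data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
centers, sigma = np.array([0.2, 0.5, 0.8]), 0.15   # antecedents (e.g. from clustering)
cands = np.linspace(0.0, 1.0, 11)                  # candidate consequent values
counts = np.zeros((centers.size, cands.size))      # evidence per (rule, candidate)

firing = lambda x: np.exp(-0.5 * ((x - centers) / sigma) ** 2)

# Deliberately inconsistent training data: y follows either x or 1 - x.
xs = rng.uniform(0, 1, 500)
ys = np.where(rng.random(500) < 0.5, xs, 1 - xs)

# Each sample votes, weighted by rule firing strength, for the candidate
# consequent closest to its observed output.
for x, y in zip(xs, ys):
    counts[:, np.argmin(np.abs(cands - y))] += firing(x)
probs = counts / counts.sum(axis=1, keepdims=True)  # eligibility probabilities

def two_outputs(x):
    """Blend the per-rule candidate distributions and return the two most
    probable consequent values (the 'two probable outputs' per input)."""
    w = firing(x)
    mix = (w @ probs) / w.sum()
    return cands[np.argsort(mix)[-2:]]

print(two_outputs(0.3))   # expect values near 0.3 and 0.7
```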
Original/Review Paper
H.3. Artificial Intelligence
A. Moradi; A. Abdi Seyedkolaei; Seyed A. Hosseini
Abstract
A software-defined network is a new computer network architecture that separates the control and data layers in network devices such as switches and routers. With the emergence of software-defined networks, a class of location problems, called the controller placement problem, has attracted much research attention. The task in this problem is to simultaneously find the optimal number and locations of controllers satisfying a set of routing and capacity constraints. In this paper, we suggest an effective solution method based on the so-called Iterated Local Search (ILS) strategy. We then compare our method to an existing standard mathematical programming solver on an extensive set of problem instances. It turns out that our method is computationally much more effective and efficient on medium to large instances of the problem.
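A skeleton of the ILS strategy for a fixed number of controllers k, under a stand-in cost function (worst node-to-controller distance on a toy line graph). The paper's formulation also optimizes k itself and enforces routing and capacity constraints, which are abstracted away here.

```python
import random

def ils_place(nodes, cost, k, iters=50, seed=0):
    """Iterated Local Search skeleton for placing k controllers.
    cost(placement) must encode the latency/capacity objective."""
    rnd = random.Random(seed)
    cur = best = frozenset(rnd.sample(nodes, k))
    for _ in range(iters):
        improved = True                      # local search: best single swap
        while improved:
            improved = False
            for out in list(cur):
                for inn in set(nodes) - cur:
                    cand = (cur - {out}) | {inn}
                    if cost(cand) < cost(cur):
                        cur, improved = cand, True
        if cost(cur) < cost(best):
            best = cur
        keep = rnd.sample(sorted(best), k - 1)   # perturbation: kick one
        extra = rnd.sample(sorted(set(nodes) - set(keep)), 1)
        cur = frozenset(keep + extra)
    return best

# Toy instance: nodes on a line, objective = worst node-to-controller distance.
nodes = list(range(20))
cost = lambda p: max(min(abs(v - c) for c in p) for v in nodes)
print(sorted(ils_place(nodes, cost, k=3)))
```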
Original/Review Paper
H.5. Image Processing and Computer Vision
A. Azimzadeh Irani; R. Pourgholi
Abstract
Ray casting is a direct volume rendering technique for visualizing 3D arrays of sampled data. It has vital applications in medical and biological imaging. Nevertheless, it is inherently prone to cluttered classification results: it suffers from overlapping transfer function values and lacks a sufficiently powerful voxel-parsing mechanism for object distinction. In this work, we propose an image-processing-based approach to enhancing the ray casting technique for the object distinction process. The rendering stage is modified to accommodate masking information generated by a K-means-based hybrid segmentation algorithm. An effective set of image processing techniques is employed in the construction of a generic segmentation system capable of generating object membership information.
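A toy illustration of the masking idea: intensity-only K-means labels the voxels, and the labels gate per-voxel opacity during front-to-back compositing along axis-aligned rays. The paper's segmentation is a hybrid pipeline and its renderer a full ray caster; the volume, cluster count, and opacity below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
vol = rng.random((64, 64, 64)).astype(np.float32)   # toy scalar volume

# Stand-in for the paper's hybrid K-means segmentation: cluster voxel
# intensities into 3 labels with a few Lloyd iterations.
cents = np.array([0.2, 0.5, 0.8])
for _ in range(10):
    labels = np.argmin(np.abs(vol[..., None] - cents), axis=-1)
    cents = np.array([vol[labels == c].mean() for c in range(3)])

def render_axis_rays(vol, labels, keep, alpha=0.05):
    """Front-to-back compositing along z; the segmentation mask zeroes the
    opacity of voxels outside the object(s) of interest, which is the
    'object distinction' role the masking information plays."""
    mask = np.isin(labels, keep)
    acc = np.zeros(vol.shape[:2], np.float32)    # accumulated color
    trans = np.ones(vol.shape[:2], np.float32)   # remaining transparency
    for z in range(vol.shape[2]):
        a = alpha * mask[:, :, z]
        acc += trans * a * vol[:, :, z]
        trans *= 1.0 - a
    return acc

image = render_axis_rays(vol, labels, keep=[2])
print(image.shape, image.max())
```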
Original/Review Paper
H.3. Artificial Intelligence
S. Adeli; P. Moradi
Abstract
Since most organizations present their services electronically, the number of functionally equivalent web services is increasing, as is the number of users that employ those web services. Consequently, so much information is generated by users and web services that users have trouble finding the web services appropriate for them. Therefore, a recommendation method is required for predicting the quality of web services (QoS) and recommending web services. Most existing collaborative filtering approaches do not operate efficiently in recommending web services, because they ignore effective factors such as the dependency among users/web services, the popularity of users/web services, and the location of web services/users. In this paper, a web service recommendation method called Popular-Dependent Collaborative Filtering (PDCF) is proposed. The proposed method handles the QoS differences experienced by users, as well as the dependency among users on a specific web service, using a user/web-service dependency factor. Additionally, a user/web-service popularity factor is considered in the PDCF, which significantly enhances its effectiveness. We also propose a location-aware method called LPDCF, which incorporates the locations of web services into the recommendation process of the PDCF. A set of experiments is conducted on two real-world datasets to evaluate the performance of the PDCF and to investigate the effect of the matrix factorization model on its efficiency. The results indicate that the PDCF outperforms other competing methods in most cases.
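As a rough sketch of the flavor of model involved, the code below runs SGD matrix factorization on a toy QoS matrix with a per-service popularity weight on the loss. This is a simplification, not the PDCF's actual popularity and dependency factors, and all sizes and hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n_users, n_items, rank = 30, 20, 4
Q = rng.random((n_users, n_items))       # toy QoS values (e.g. response time)
mask = rng.random(Q.shape) < 0.3         # only these entries are observed

U = 0.1 * rng.standard_normal((n_users, rank))
V = 0.1 * rng.standard_normal((n_items, rank))
pop = mask.sum(axis=0) / mask.sum()      # crude per-service 'popularity'

# SGD matrix factorization; the popularity weight up-weights errors on
# frequently invoked services, a simplification of the paper's factors.
lr, reg = 0.05, 0.02
obs = np.argwhere(mask)
for _ in range(200):
    for u, i in obs[rng.permutation(len(obs))]:
        e = Q[u, i] - U[u] @ V[i]
        w = 1.0 + pop[i]
        U[u] += lr * (w * e * V[i] - reg * U[u])
        V[i] += lr * (w * e * U[u] - reg * V[i])

pred = U @ V.T
print(f"train RMSE: {np.sqrt(((pred - Q)[mask] ** 2).mean()):.3f}")
```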
Original/Review Paper
H.6.5.10. Remote sensing
M. Imani
Abstract
Due to the abundant spectral information contained in hyperspectral images, they are suitable data for anomalous target detection. The use of spatial features in addition to spectral ones can improve anomaly detection performance. An anomaly detector, called the nonparametric spectral-spatial detector (NSSD), is proposed in this work, which utilizes the benefits of spatial features and local structures extracted by morphological filters. The resulting spectral-spatial hypercube has high dimensionality, so accurate estimates of the background statistics may not be obtainable in small local windows, and applying conventional detectors such as the local Reed-Xiaoli (RX) detector to such high-dimensional data is not feasible. To deal with this difficulty, a nonparametric distance that requires no estimate of the data statistics is used instead of the Mahalanobis distance. According to the experimental results, the average detection accuracy improvement of the proposed NSSD method over Global RX, Local RX, weighted RX, linear-filtering-based RX (LF-RX), background joint sparse representation detection (BJSRD), Kernel RX, subspace RX (SSRX), and RX with uniform target detector (RX-UTD) is 47.68%, 27.86%, 13.23%, 29.26%, 3.33%, 17.07%, 15.88%, and 44.25%, respectively.
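For reference, here is a minimal global RX detector, one of the baselines the abstract compares against: each pixel's Mahalanobis distance from the scene background statistics. The NSSD replaces this Mahalanobis distance with a nonparametric one precisely so the high-dimensional spectral-spatial hypercube can be handled; the cube and implanted anomaly below are synthetic.

```python
import numpy as np

def global_rx(cube):
    """Global RX anomaly scores: Mahalanobis distance of each pixel's
    spectrum from the global background mean and covariance."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    d = X - mu
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return scores.reshape(h, w)

rng = np.random.default_rng(5)
cube = rng.normal(0, 1, (50, 50, 30))        # 30-band synthetic scene
cube[25, 25] += 6.0                          # implanted anomalous spectrum
scores = global_rx(cube)
print(np.unravel_index(scores.argmax(), scores.shape))  # -> (25, 25)
```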
Original/Review Paper
H.7. Simulation, Modeling, and Visualization
A.R. Ebrahimi; Gh. Barid Loghmani; M. Sarfraz
Abstract
In this paper, a new technique has been designed to capture the outline of 2D shapes using cubic Bézier curves. The proposed technique avoids the traditional approach of optimizing a global squared fitting error and instead emphasizes local control of the data points. A maximum-error criterion keeps the absolute fitting error below a threshold and governs the curve subdivision process. Depending on the specified maximum error, the technique subdivides complex segments by itself, and curve fitting is carried out simultaneously. A comparative study of experimental results highlights various advantages of the proposed technique, such as accurate representation, low approximation errors, and efficient computational complexity.
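A minimal sketch of error-driven subdivision with least-squares cubic Bézier fitting under chord-length parameterization. For simplicity it splits segments at the midpoint, whereas the paper's technique decides where to subdivide from the local error; the tolerance and test outline are hypothetical.

```python
import numpy as np

def fit_cubic_bezier(pts):
    """Least-squares cubic Bezier fit under chord-length parameterization,
    with the endpoints clamped to the data."""
    t = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t /= t[-1]
    B = np.stack([(1 - t) ** 3, 3 * t * (1 - t) ** 2,
                  3 * t ** 2 * (1 - t), t ** 3], axis=1)
    # Solve for the two inner control points, keeping P0 and P3 fixed.
    rhs = pts - np.outer(B[:, 0], pts[0]) - np.outer(B[:, 3], pts[-1])
    inner, *_ = np.linalg.lstsq(B[:, 1:3], rhs, rcond=None)
    ctrl = np.vstack([pts[0], inner, pts[-1]])
    err = np.linalg.norm(B @ ctrl - pts, axis=1).max()
    return ctrl, err

def fit_outline(pts, max_err=0.05):
    """Subdivide until every piece satisfies the maximum-error criterion,
    the role subdivision plays in the paper."""
    ctrl, err = fit_cubic_bezier(pts)
    if err <= max_err or len(pts) < 8:
        return [ctrl]
    mid = len(pts) // 2
    return fit_outline(pts[:mid + 1], max_err) + fit_outline(pts[mid:], max_err)

s = np.linspace(0, 2 * np.pi, 80)
outline = np.c_[np.cos(s), np.sin(0.9 * s)]   # toy 2D outline
print(f"{len(fit_outline(outline))} Bezier segments")
```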
Original/Review Paper
A.1. General
S. Asadi Amiri
Abstract
Removing salt-and-pepper noise is an active research area in image processing. In this paper, a two-phase method is proposed for removing salt-and-pepper noise while preserving edges and fine details. In the first phase, noise-candidate pixels, which are likely to be contaminated by noise, are detected. In the second phase, only the noise-candidate pixels are restored using an adaptive median filter. For noise detection, a two-stage method is utilized. First, thresholding is applied to the image for an initial estimation of the noise-candidate pixels. Since some pixels in the image may resemble salt-and-pepper noise, these pixels can be mistakenly identified as noise. Hence, in the second stage of noise detection, pixon-based segmentation is used to identify the salt-and-pepper noise pixels more accurately. A pixon is a set of neighboring pixels with similar gray levels. The proposed method was evaluated on several noisy images; the results show the accuracy of the proposed method in salt-and-pepper noise removal and its superiority over several existing methods.
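A sketch of the restoration phase only: an adaptive median filter applied exclusively to noise-candidate pixels, growing the window until the median is not itself salt or pepper. The detection stand-in below simply flags extreme intensities; the paper's detector is the thresholding-plus-pixon pipeline described above.

```python
import numpy as np

def restore_candidates(img, candidates, w_max=7):
    """Adaptive median filtering applied only to noise-candidate pixels
    (phase two of the method); `candidates` is assumed to come from the
    detection phase."""
    out = img.copy()
    pad = w_max // 2
    padded = np.pad(img, pad, mode='reflect')
    for r, c in np.argwhere(candidates):
        for w in range(3, w_max + 1, 2):       # grow window until the
            half = w // 2                      # median is not salt/pepper
            win = padded[r + pad - half:r + pad + half + 1,
                         c + pad - half:c + pad + half + 1]
            med = np.median(win)
            if 0 < med < 255:
                out[r, c] = med
                break
    return out

rng = np.random.default_rng(6)
img = np.full((64, 64), 128, dtype=np.uint8)
noisy = img.copy()
noise = rng.random(img.shape)
noisy[noise < 0.05], noisy[noise > 0.95] = 0, 255   # 10% salt & pepper
candidates = (noisy == 0) | (noisy == 255)          # toy detector stand-in
print(np.abs(restore_candidates(noisy, candidates).astype(int) - img).mean())
```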
Original/Review Paper
H.3. Artificial Intelligence
S. Roohollahi; A. Khatibi Bardsiri; F. Keynia
Abstract
Social networks are streaming, diverse, and contain a wide range of edges; they continuously evolve over time and are formed by the activities among users (such as tweets, emails, etc.), where each activity among users adds an edge to the network graph. Despite their popularity, the dynamicity and large size of most social networks make it difficult or impossible to study the entire network. This paper proposes a sampling algorithm equipped with an evaluator unit for analyzing the edges, together with a set of simple fixed-structure learning automata. The evaluator unit evaluates each edge and then decides whether the edge and its corresponding node should be added to the sample set. In the proposed algorithm, each node of the main activity graph is equipped with a simple learning automaton. The proposed algorithm is compared with the best current sampling algorithm, in terms of the Kolmogorov-Smirnov (KS) test and normalized L1 and L2 distances, on real and synthetic networks presented as a sequence of edges. Experimental results show the superiority of the proposed algorithm.
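A toy rendering of the mechanism, assuming a Tsetlin-style two-action fixed-structure automaton per node and a crude evaluator signal (whether the edge touches an already-sampled node). The paper's evaluator unit and reward scheme are richer; the budget, stream, and automaton depth here are hypothetical.

```python
import random

class FixedStructureLA:
    """Two-action (accept/reject) fixed-structure learning automaton with
    `depth` memory states per action: states 1..depth mean accept,
    depth+1..2*depth mean reject."""
    def __init__(self, depth=4):
        self.state, self.depth = 1, depth
    def action(self):
        return 'accept' if self.state <= self.depth else 'reject'
    def reward(self):                      # move deeper into current region
        if self.action() == 'accept':
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.depth, self.state + 1)
    def penalize(self):                    # drift toward the boundary / flip
        self.state += 1 if self.action() == 'accept' else -1

def sample_stream(edges, budget, seed=0):
    """Streaming edge sampling: an evaluator scores each edge and the
    source automaton is rewarded when its decision matched that signal."""
    rnd = random.Random(seed)
    las, sample, nodes = {}, [], set()
    for u, v in edges:
        la = las.setdefault(u, FixedStructureLA())
        good = u in nodes or v in nodes or rnd.random() < 0.1
        if la.action() == 'accept' and len(sample) < budget:
            sample.append((u, v)); nodes |= {u, v}
        la.reward() if (la.action() == 'accept') == good else la.penalize()
    return sample

stream = [(random.randrange(50), random.randrange(50)) for _ in range(2000)]
print(len(sample_stream(stream, budget=200)))
```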