A. Nozaripour; H. Soltanizadeh
Abstract
Sparse representation, owing to advantages such as noise robustness and a strong mathematical foundation, has attracted attention as a powerful tool in recent decades. In this paper, using sparse representation, the kernel trick, and a different technique for Region of Interest (ROI) extraction that we presented in our previous work, we introduce a new method for dorsal hand vein recognition that is robust against rotation. In this method, the ROI is selected by varying the length and angle of its sides, which largely neutralizes the undesirable effects of hand rotation during image acquisition; consequently, depending on the amount of hand rotation, the ROI differs in size and shape from image to image. On the other hand, because dorsal hand vein patterns share the same direction distribution, we apply the kernel trick to sparse representation for classification, so that most samples from different classes but with the same direction distribution are classified properly. Combining these two techniques yields a method for dorsal hand vein recognition that is effective against hand rotation. An increase of 2.26% in the recognition rate is observed for the proposed method when compared to three conventional SRC-based algorithms and three sparse-coding classification methods that use dictionary learning.
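The classification step the abstract builds on is sparse-representation-based classification (SRC): a test sample is coded as a sparse combination of all training samples, and the class whose atoms alone give the smallest reconstruction residual wins. Below is a minimal NumPy sketch of plain (non-kernelized) SRC under the standard l1 formulation; the `ista` solver, the regularization weight `lam`, and the toy dictionary in the usage are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ista(D, y, lam=0.05, n_iter=300):
    """Solve min_x 0.5*||D x - y||^2 + lam*||x||_1 by iterative
    soft-thresholding (ISTA). D has unit-norm columns (atoms)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = x - D.T @ (D @ x - y) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

def src_classify(D, labels, y, lam=0.05):
    """SRC decision rule: code y over the whole dictionary, then pick
    the class whose atoms alone reconstruct y with smallest residual."""
    x = ista(D, y, lam)
    best_class, best_res = None, np.inf
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)  # keep only class-c coefficients
        res = np.linalg.norm(y - D @ xc)
        if res < best_res:
            best_class, best_res = c, res
    return best_class
```

Kernelizing this scheme, as the paper does, amounts to replacing the inner products `D.T @ D` and `D.T @ y` with kernel evaluations, so samples that differ only in their (shared) direction distribution become separable in the feature space.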
H.6. Pattern Recognition
Kh. Sadatnejad; S. Shiry Ghidari; M. Rahmati
Abstract
The kernel trick and projection to tangent spaces are two choices for linearizing data points lying on Riemannian manifolds. These approaches provide the prerequisites for applying standard machine learning methods on Riemannian manifolds. Classical kernels implicitly project data to a high-dimensional feature space without considering the intrinsic geometry of the data points, while projection to tangent spaces preserves topology only along radial geodesics. In this paper, we propose a method for extrinsic inference on Riemannian manifolds using a kernel approach while preserving the topology of the entire dataset. We show that computing the Gramian matrix using geodesic distances, on a complete Riemannian manifold with a unique minimizing geodesic between each pair of points, provides a feature mapping that preserves the topology of the data points in the feature space. The proposed approach is evaluated on real datasets composed of EEG signals of patients with two different mental disorders, as well as texture, visual object class, and tracking datasets. To assess the effectiveness of our scheme, the extracted features are compared against other state-of-the-art techniques for extrinsic inference over the symmetric positive definite (SPD) Riemannian manifold. Experimental results show the superior accuracy of the proposed approach over approaches that use the kernel trick to compute similarity on SPD manifolds while ignoring, or only partially preserving, the topology of the dataset.
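The Gramian construction described above can be sketched concretely for the SPD manifold. The sketch below uses the log-Euclidean geodesic distance and a Gaussian kernel as one assumed, concrete choice (this pairing yields a positive semidefinite Gram matrix); the abstract does not fix the paper's exact metric or kernel, and `sigma` is an illustrative parameter.

```python
import numpy as np

def spd_log(A):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T           # V diag(log w) V^T

def spd_geodesic_dist(A, B):
    """Log-Euclidean geodesic distance between two SPD matrices."""
    return np.linalg.norm(spd_log(A) - spd_log(B), 'fro')

def gram_from_geodesics(mats, sigma=1.0):
    """Gaussian Gram matrix built from pairwise geodesic distances,
    so the feature mapping reflects the manifold geometry."""
    n = len(mats)
    K = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            d = spd_geodesic_dist(mats[i], mats[j])
            K[i, j] = K[j, i] = np.exp(-d ** 2 / (2 * sigma ** 2))
    return K
```

Any kernel machine (SVM, kernel regression, etc.) can then consume `K` directly, which is the sense in which the approach makes standard methods applicable on the manifold.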