H.6. Pattern Recognition
Kh. Sadatnejad; S. Shiry Ghidari; M. Rahmati
Abstract
The kernel trick and projection to tangent spaces are two approaches for linearizing data points lying on Riemannian manifolds; both provide the prerequisites for applying standard machine learning methods on such manifolds. Classical kernels implicitly project data to a high-dimensional feature space without considering the intrinsic geometry of the data points, while projection to tangent spaces preserves topology faithfully only along radial geodesics. In this paper, we propose a method for extrinsic inference on a Riemannian manifold that uses a kernel approach while preserving the topology of the entire dataset. We show that computing the Gramian matrix from geodesic distances, on a complete Riemannian manifold with a unique minimizing geodesic between each pair of points, yields a feature mapping that preserves the topology of the data points in the feature space. The proposed approach is evaluated on real datasets composed of EEG signals of patients with two different mental disorders, as well as texture, visual object class, and tracking datasets. To assess its effectiveness, the extracted features are compared against other state-of-the-art techniques for extrinsic inference on the symmetric positive definite (SPD) Riemannian manifold. Experimental results show the superior accuracy of the proposed approach over approaches that use the kernel trick to compute similarity on SPD manifolds while ignoring, or only partially preserving, the topology of the dataset.
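As an illustration of the kind of construction the abstract describes, a Gramian matrix can be built from geodesic distances on the SPD manifold. The sketch below is not the paper's exact method: it assumes the affine-invariant Riemannian metric as the geodesic distance and a Gaussian kernel with a hypothetical bandwidth parameter `sigma`, both of which are illustrative choices.

```python
import numpy as np
from scipy.linalg import logm, fractional_matrix_power


def airm_distance(A, B):
    """Affine-invariant geodesic distance between SPD matrices:
    d(A, B) = || logm(A^{-1/2} B A^{-1/2}) ||_F
    (one common choice of geodesic distance on the SPD manifold)."""
    A_inv_sqrt = fractional_matrix_power(A, -0.5)
    M = A_inv_sqrt @ B @ A_inv_sqrt  # SPD, so logm(M) is real
    return np.linalg.norm(logm(M), "fro")


def geodesic_gram(spd_mats, sigma=1.0):
    """Gramian matrix built from pairwise geodesic distances,
    here via a Gaussian kernel exp(-d^2 / (2 sigma^2))."""
    n = len(spd_mats)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            d = airm_distance(spd_mats[i], spd_mats[j])
            K[i, j] = K[j, i] = np.exp(-d**2 / (2.0 * sigma**2))
    return K


# Usage on a few random SPD matrices (e.g., stand-ins for EEG covariance matrices):
rng = np.random.default_rng(0)
X = [a @ a.T + 3.0 * np.eye(3)
     for a in (rng.standard_normal((3, 3)) for _ in range(4))]
K = geodesic_gram(X)
```

Because the distance is computed along the manifold rather than in the ambient Euclidean space, nearby SPD matrices in the geodesic sense receive high kernel similarity, which is the sense in which such a Gramian can respect the dataset's intrinsic geometry.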