[1] G. Chandrashekar and F. Sahin, “A survey on feature selection methods,” Computers & Electrical Engineering, vol. 40, no. 1, pp. 16–28, 2014.
[2] I. Guyon and A. Elisseeff, “An introduction to variable and feature selection,” Journal of Machine Learning Research, vol. 3, pp. 1157–1182, 2003.
[3] R. Kohavi and G. H. John, “Wrappers for feature subset selection,” Artificial Intelligence, vol. 97, no. 1–2, pp. 273–324, 1997.
[4] S. A. Ali Shah, H. M. Shabbir, S. U. Rehman, and M. Waqas, “A comparative study of feature selection approaches: 2016-2020,” International Journal of Scientific & Engineering Research, vol. 11, no. 2, pp. 469–478, 2020.
[5] J. Han, M. Kamber, and J. Pei, Data Mining: Concepts and Techniques, 3rd ed., Morgan Kaufmann/Elsevier, Waltham, MA, 2012.
[6] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik, “Gene selection for cancer classification using support vector machines,” Machine Learning, vol. 46, no. 1–3, pp. 389–422, 2002. doi: 10.1023/A:1012487302797.
[7] F. Song, Z. Guo, and D. Mei, “Feature selection using principal component analysis,” in 2010 International Conference on System Science, Engineering Design and Manufacturing Informatization, vol. 1, pp. 27–30, 2010.
[8] G. R. Naik, Advances in Principal Component Analysis: Research and Development. Springer, 2019.
[9] H. Zhang, “The optimality of Naïve Bayes,” in Proceedings of the Seventeenth International Florida Artificial Intelligence Research Society Conference, V. Barr and Z. Markov, Eds., AAAI Press, Miami Beach, FL, USA, pp. 562–567, 2004.
[10] E. O. Omuya, G. O. Okeyo, and M. W. Kimwele, “Feature selection for classification using principal component analysis and information gain,” Expert Systems with Applications, vol. 174, p. 114765, 2021.
[11] S. Kashef, H. Nezamabadi-Pour, and B. Nikpour, “Multilabel feature selection: A comprehensive review and guiding experiments,” WIREs Data Mining and Knowledge Discovery, vol. 8, no. 2, 2018.
[12] S. Solorio-Fernández, J. A. Carrasco-Ochoa, and J. F. Martínez-Trinidad, “A review of unsupervised feature selection methods,” Artificial Intelligence Review, vol. 53, no. 2, pp. 907–948, 2019.
[13] J. Tang, S. Alelyani, and H. Liu, “Feature selection for classification: A review,” in Data Classification: Algorithms and Applications, Chapman and Hall/CRC, pp. 37–64, 2014.
[14] W. Zheng et al., “Multifeature based network revealing the structural abnormalities in autism spectrum disorder,” IEEE Transactions on Affective Computing, vol. 12, no. 3, pp. 732–742, 2021.
[15] R. Sheikhpour, M. A. Sarram, S. Gharaghani, and M. A. Z. Chahooki, “A survey on semi-supervised feature selection methods,” Pattern Recognition, vol. 64, pp. 141–158, 2017.
[16] J. C. Ang, A. Mirzal, H. Haron, and H. N. A. Hamed, “Supervised, unsupervised, and semi-supervised feature selection: a review on gene selection,” IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 13, no. 5, pp. 971–989, 2016.
[17] T. A. Alhaj et al., “Feature selection using information gain for improved structural-based alert correlation,” PLOS ONE, vol. 11, no. 11, p. e0166017, 2016.
[18] Z. Zhao, L. Wang, and H. Liu, “Efficient spectral feature selection with minimum redundancy,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 24, no. 1, pp. 673–678, 2010.
[19] D. Cai, C. Zhang, and X. He, “Unsupervised feature selection for multi-cluster data,” in Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '10, ACM, 2010.
[20] J. Nobre and R. F. Neves, “Combining principal component analysis, discrete wavelet transform and XGBoost to trade in the financial markets,” Expert Systems with Applications, vol. 125, pp. 181–194, 2019.
[21] P. Sanguansat, Ed., Principal Component Analysis. IntechOpen: Rijeka, 2012.
[22] M. A. Jabri, “High Performance Principal Component Analysis with ParAL,” Neuromorphic LLC, Oct. 1998.
[23] K. Song, B. Zhang, W. Li, L. Yan, and X. Wang, “Research on parallel principal component analysis based on ternary optical computer,” Optik, vol. 241, p. 167176, 2021.
[24] N. Funatsu and Y. Kuroki, “Fast parallel processing using GPU in computing L1-PCA bases,” in TENCON 2010-2010 IEEE Region 10 Conference, Fukuoka, Japan, Nov. 2010, pp. 2087-2090.
[25] D. A. Ross, J. Lim, R. S. Lin, and M. H. Yang, “Incremental learning for robust visual tracking,” International Journal of Computer Vision, vol. 77, pp. 125-141, 2008.
[26] N. Halko, P. G. Martinsson, and J. A. Tropp, “Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions,” SIAM Review, vol. 53, no. 2, pp. 217-288, 2011.
[27] C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
[28] T. Hastie, The Elements of Statistical Learning, Springer Series in Statistics, 2nd ed., Springer, New York, NY, 2017.
[29] A. I. Pratiwi and Adiwijaya, “On the feature selection and classification based on information gain for document sentiment analysis,” Applied Computational Intelligence and Soft Computing, vol. 2018, pp. 1–5, 2018.
[30] X. Xu, H. Gu, Y. Wang, J. Wang, and P. Qin, “Autoencoder based feature selection method for classification of anticancer drug response,” Frontiers in Genetics, vol. 10, 2019.
[31] A. Zien, N. Krämer, S. Sonnenburg, and G. Rätsch, “The feature importance ranking measure,” in Advances in Neural Information Processing Systems, vol. 21, pp. 694–709, Springer Berlin Heidelberg, 2009.
[32] I. Kamkar, S. K. Gupta, D. Phung, and S. Venkatesh, “Exploiting feature relationships towards stable feature selection,” in 2015 IEEE International Conference on Data Science and Advanced Analytics (DSAA), vol. 37, IEEE, pp. 1–10, 2015.
[33] W. W. B. Goh and L. Wong, “Evaluating feature-selection stability in next-generation proteomics,” Journal of Bioinformatics and Computational Biology, vol. 14, no. 05, p. 1650029, 2016.
[34] S. Raghavendra and M. Indiramma, “Hybrid data mining model for the classification and prediction of medical datasets,” International Journal of Knowledge Engineering and Soft Data Paradigms, vol. 5, no. 3/4, p. 262, 2016.
[35] B. Xin, L. Hu, Y. Wang, and W. Gao, “Stable feature selection from brain sMRI,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 29, no. 1, 2015.
[36] A. Mehrabinezhad, M. Teshnelab, and A. Sharifi, “Autoencoder-PCA-based online supervised feature extraction-selection approach,” Journal of AI and Data Mining, vol. 11, no. 4, pp. 525–534, 2023. doi: 10.22044/jadm.2023.12436.2390.
[37] R. Adhao and V. Pachghare, “Feature selection using principal component analysis and genetic algorithm,” Journal of Discrete Mathematical Sciences and Cryptography, vol. 23, no. 2, pp. 595–602, 2020.
[38] I. T. Jolliffe, “Principal component analysis for special types of data,” in Principal Component Analysis, 2nd ed., Springer, New York, NY, pp. 338–372, 2002.
[39] L. Peterson, “K-nearest neighbor,” Scholarpedia, vol. 4, no. 2, p. 1883, 2009.
[41] H. Nosrati Nahook and M. Eftekhari, “A new method for feature selection based on fuzzy logic,” Computational Intelligence in Electrical Engineering, vol. 4, no. 1, pp. 71–84, 2013.
[42] E. Namsrai, T. Munkhdalai, M. Li, J. H. Shin, O. E. Namsrai, and K. H. Ryu, “A feature selection-based ensemble method for arrhythmia classification,” Journal of Information Processing Systems, vol. 9, no. 1, pp. 31–40, 2013.
[43] R. Jain, P. R. Betrabet, B. A. Rao, and N. S. Reddy, “Classification of cardiac arrhythmia using improved feature selection methods and ensemble classifiers,” Journal of Physics: Conference Series, vol. 2161, no. 1, p. 012003, IOP Publishing, 2022.
[44] M. Tunç and G. B. Cangöz, “Classification of the cardiac arrhythmia using combined feature selection algorithms,” Turkish Journal of Science and Technology, vol. 19, no. 1, pp. 147–159, 2024.
[45] D. Liang, C. F. Tsai, and H. T. Wu, “The effect of feature selection on financial distress prediction,” Knowledge-Based Systems, vol. 73, pp. 289–297, 2015.
[46] Y. Zhou, M. Shamsu Uddin, T. Habib, G. Chi, and K. Yuan, “Feature selection in credit risk modeling: an international evidence,” Economic Research-Ekonomska Istraživanja, vol. 34, no. 1, pp. 3064–3091, 2021.
[47] A. Rouhi and H. Nezamabadi-Pour, “A hybrid-based feature selection method for high-dimensional data using ensemble methods,” Iranian Journal of Electrical and Computer Engineering, vol. 60, no. 4, p. 283, 2018.
[48] M. A. Rahman and R. C. Muniyandi, “Feature selection from colon cancer dataset for cancer classification using artificial neural network,” International Journal of Advanced Science, Engineering and Information Technology, vol. 8, no. 4-2, pp. 1387–1393, 2018.
[49] O. O. Petinrin, F. Saeed, N. Salim, M. Toseef, Z. Liu, and I. O. Muyide, “Dimension reduction and classifier-based feature selection for oversampled gene expression data and cancer classification,” Processes, vol. 11, no. 7, p. 1940, 2023.
[50] S. DeepaLakshmi and T. Velmurugan, “Benchmarking attribute selection techniques for microarray data,” ARPN Journal of Engineering and Applied Sciences, vol. 13, no. 11, pp. 3740–3748, 2018.
[51] C. De Stefano, F. Fontanella, and A. Scotto di Freca, “Feature selection in high dimensional data by a filter-based genetic algorithm,” in Applications of Evolutionary Computation: 20th European Conference, EvoApplications 2017, Amsterdam, The Netherlands, April 19–21, 2017, Proceedings, Part I, Springer International Publishing, pp. 506–521, 2017.
[52] A. Mehrabinezhad, M. Teshnelab, and A. Sharifi, “Autoencoder-PCA-based online supervised feature extraction-selection approach,” Journal of AI and Data Mining, vol. 11, no. 4, pp. 525–534, 2023.
[53] K. Yang, Z. Cai, J. Li, and G. Lin, “A stable gene selection in microarray data analysis,” BMC Bioinformatics, vol. 7, pp. 1–16, 2006.
[54] F. H. Yağın, Z. Küçükakçalı, İ. B. Çiçek, and H. G. Bağ, “The effects of variable selection and dimension reduction methods on the classification model in the small round blue cell tumor dataset,” Middle Black Sea Journal of Health Science, vol. 7, no. 3, pp. 390–396, 2021.
[55] M. Hamim, I. El Mouden, M. Ouzir, H. Moutachaouik, and M. Hain, “A novel dimensionality reduction approach to improve microarray data classification,” IIUM Engineering Journal, vol. 22, no. 1, pp. 1–22, 2021.
[56] S. Karimi and M. Farrokhnia, “Leukemia and small round blue-cell tumor cancer detection using microarray gene expression data set: Combining data dimension reduction and variable selection technique,” Chemometrics and Intelligent Laboratory Systems, vol. 139, pp. 6–14, 2014.
[57] L. Y. Chuang, C. H. Ke, and C. H. Yang, “A hybrid both filter and wrapper feature selection method for microarray classification,” arXiv preprint arXiv:1612.08669, 2016.
[58] S. J. Susmi, H. K. Nehemiah, and A. Kannan, “Hybrid dimension reduction techniques with genetic algorithm and neural network for classifying leukemia gene expression data,” Indian Journal of Science and Technology, vol. 9, no. 1, pp. 1–8, 2016.
[59] T. K. Abuya, “Lung cancer prediction from Elvira biomedical dataset using ensemble classifier with principal component analysis,” Journal of Data Analysis and Information Processing, vol. 11, no. 2, pp. 175–199, 2023.
[60] D. Powers, “Evaluation: From precision, recall and F-measure to ROC, informedness, markedness & correlation,” Journal of Machine Learning Technologies, vol. 2, no. 1, pp. 37–63, 2011.