Document Type: Review Article

Authors

Department of Computer and Information Technology Engineering, Qazvin Branch, Islamic Azad University, Qazvin, Iran.

Abstract

This paper provides a comprehensive review of game theory as a potential solution to challenges in sensor-based human activity recognition (HAR). Game theory is a mathematical framework for modeling interactions among multiple entities and is applied in fields such as economics, political science, and computer science. In recent years, it has increasingly been applied to machine learning problems, including HAR, to improve recognition performance and algorithmic efficiency. The review covers the challenges shared by HAR and machine learning, compares prior work on traditional HAR approaches, and discusses the potential advantages of game-theoretic methods. It examines different game theory formulations, including non-cooperative and cooperative games, and offers insights into how each can improve HAR systems. The authors also propose new game theory-based approaches and evaluate their effectiveness against traditional ones. Overall, this review expands the scope of HAR research by introducing game-theoretic concepts and solutions to the field, and it provides valuable insights for researchers interested in applying such approaches to HAR.
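To make the cooperative-game idea mentioned above concrete, the sketch below computes exact Shapley values for a toy coalitional game, a technique commonly used for feature selection in HAR pipelines (each sensor feature is a "player," and a coalition's worth is the classifier accuracy it achieves). The feature names and accuracy numbers here are purely illustrative assumptions, not values from the reviewed literature:

```python
from itertools import permutations

# Hypothetical characteristic function v: accuracy reached by an activity
# classifier trained on each coalition of sensor features (illustrative only).
v = {
    frozenset(): 0.0,
    frozenset({"acc_mean"}): 0.55,
    frozenset({"acc_var"}): 0.50,
    frozenset({"gyro_energy"}): 0.40,
    frozenset({"acc_mean", "acc_var"}): 0.70,
    frozenset({"acc_mean", "gyro_energy"}): 0.65,
    frozenset({"acc_var", "gyro_energy"}): 0.60,
    frozenset({"acc_mean", "acc_var", "gyro_energy"}): 0.80,
}

def shapley_values(players, v):
    """Exact Shapley value: each player's marginal contribution to the
    coalition, averaged over every ordering in which players can join."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in players}

phi = shapley_values(["acc_mean", "acc_var", "gyro_energy"], v)
print(phi)
# Efficiency axiom: the values sum to v(grand coalition) = 0.80,
# so the full-feature accuracy is fairly divided among the features.
print(sum(phi.values()))
```

Features with the largest Shapley value contribute most, on average, across all coalitions, which gives a principled ranking for feature selection; the exact computation shown here is exponential in the number of features, so practical HAR systems approximate it by sampling orderings.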

