Document Type : Original/Review Paper


1 Faculty of Electrical and Computer Engineering, Semnan University, Semnan, Iran.

2 Faculty of Electrical and Computer Engineering, Semnan University, Semnan, Iran.



Nowadays, given the rapid progress in pattern recognition, ideas from theoretical mathematics can be exploited to improve the efficiency of these tasks. In this paper, the Discrete Wavelet Transform (DWT) is used as a mathematical framework for handwritten digit recognition in spiking neural networks (SNNs). The motivation behind this method is that the wavelet transform separates the spike information and the noise into distinct frequency subbands while also preserving temporal information. First, the DWT is applied to the MNIST images at the network input. Next, a form of temporal encoding called constant-current Leaky Integrate-and-Fire (LIF) encoding is applied to the transformed data. The encoded images are then fed into a multilayer convolutional spiking network. Several wavelets are investigated in this architecture, and the highest classification accuracy achieved is 99.25%. The simulation results show that the DWT is an effective and worthwhile choice, bringing the network to an efficiency comparable to previous spiking networks.
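The two preprocessing stages described above (a 2D DWT over each input image, followed by constant-current LIF encoding of the transformed coefficients) can be illustrated with a minimal NumPy sketch. This is not the paper's exact implementation: the single-level Haar decomposition and the discrete-time LIF update below are standard textbook formulations, and the function names, time constant `tau`, threshold `v_th`, and step count `n_steps` are all assumptions chosen for illustration.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar DWT (illustrative, not the paper's exact code).
    Averaging/differencing over rows then columns, with 1/sqrt(2) scaling
    at each step so the transform is orthonormal (energy-preserving).
    Returns the four subbands (LL, LH, HL, HH), each half the input size."""
    a = (img[:, ::2] + img[:, 1::2]) / np.sqrt(2.0)  # row lowpass
    d = (img[:, ::2] - img[:, 1::2]) / np.sqrt(2.0)  # row highpass
    LL = (a[::2] + a[1::2]) / np.sqrt(2.0)
    LH = (a[::2] - a[1::2]) / np.sqrt(2.0)
    HL = (d[::2] + d[1::2]) / np.sqrt(2.0)
    HH = (d[::2] - d[1::2]) / np.sqrt(2.0)
    return LL, LH, HL, HH

def constant_current_lif_encode(x, n_steps=20, tau=20.0, v_th=1.0):
    """Encode each input value as the first-spike time of a LIF neuron
    driven by a constant current equal to that value (dt = 1).
    Entries that never cross threshold are assigned n_steps ("no spike")."""
    v = np.zeros_like(x, dtype=float)
    t_spike = np.full(x.shape, n_steps, dtype=int)
    for t in range(n_steps):
        v = v + (x - v) / tau                    # leaky integration step
        fired = (v >= v_th) & (t_spike == n_steps)
        t_spike[fired] = t                       # record first spike only
        v = np.where(v >= v_th, 0.0, v)          # reset after a spike
    return t_spike

# Example: decompose a 28x28 MNIST-sized image, then encode one subband.
img = np.random.default_rng(0).random((28, 28))
LL, LH, HL, HH = haar_dwt2(img)                  # each subband is 14x14
spike_times = constant_current_lif_encode(4.0 * LL)
```

Under this encoding, a larger coefficient drives the membrane potential across threshold sooner, so first-spike latency carries intensity information; this is the essence of the temporal coding used at the network input.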

