Neuromorphic Spiking Neural Network Algorithms

  • Living reference work entry
Handbook of Neuroengineering

Abstract

Artificial neural networks (ANNs) trained by deep learning have shown tremendous success in audio, visual, and decision-making tasks. While these methods are loosely inspired by the brain, in terms of actual implementation the similarity between the mammalian brain and these algorithms is merely superficial. Moreover, more often than not, these algorithms require enormous energy for real-world tasks because of their computation- and memory-heavy nature, which limits their potential application in energy-constrained scenarios. A prime reason is that, unlike their biological counterparts, these algorithms were designed with the primary goal of increasing accuracy on benchmark tasks. Spiking neural networks (SNNs) bridge the gap between artificial algorithms and the biological model of the brain through an asynchronous, spike-based signal-processing model that closely resembles that of the brain. SNNs have drawn significant attention in recent years due to their energy efficiency and their compatibility with low-power neuromorphic hardware and event-based sensors. In this chapter, we give an exhaustive analysis of the learning algorithms proposed over the past two decades for training SNNs. These algorithms are broadly classified into two types: conversion-based and spike-based learning. The advantages, drawbacks, and potential applications of each type are systematically described. We also report the accuracy achieved by these algorithms on benchmark datasets. Recent work on learning algorithms and neuromorphic hardware implementations shows that SNNs have the potential to reach state-of-the-art accuracy on several tasks at a fraction of the energy cost of their deep learning counterparts.
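The asynchronous, spike-based processing model the abstract refers to can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron, the basic unit most SNN learning algorithms build on. This is only a sketch; the parameter values (threshold, time constant, input current) are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

def lif_simulate(input_current, v_th=1.0, tau=20.0, dt=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron.

    input_current: 1-D array of input current per time step (arbitrary units).
    Returns the membrane-potential trace and the binary spike train.
    """
    v = v_reset
    v_trace, spikes = [], []
    for i_t in input_current:
        # Leaky integration: the membrane decays toward rest while
        # accumulating the input current.
        v += dt / tau * (-v + i_t)
        if v >= v_th:           # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset         # reset after the spike
        else:
            spikes.append(0)
        v_trace.append(v)
    return np.array(v_trace), np.array(spikes)

# A constant supra-threshold input produces a regular spike train:
# with these toy parameters the neuron fires every 14 steps.
_, s = lif_simulate(np.full(100, 2.0))
print(int(s.sum()))  # 7 spikes in 100 steps
```

Information is carried entirely by the timing and count of these discrete events rather than by continuous activations, which is what makes SNNs a natural match for event-based sensors and low-power neuromorphic hardware.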



Author information

Correspondence to Arindam Basu.


Copyright information

© 2022 Springer Nature Singapore Pte Ltd.

About this entry


Cite this entry

Acharya, J., Basu, A. (2022). Neuromorphic Spiking Neural Network Algorithms. In: Thakor, N.V. (eds) Handbook of Neuroengineering. Springer, Singapore. https://doi.org/10.1007/978-981-15-2848-4_44-1


  • DOI: https://doi.org/10.1007/978-981-15-2848-4_44-1


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-2848-4

  • Online ISBN: 978-981-15-2848-4

  • eBook Packages: Springer Reference Engineering, Reference Module Computer Science and Engineering
