Learning in Neuromorphic Systems

Chapter in: Neuromorphic Computing Principles and Organization (Springer, Cham, 2022)

Abstract

The human brain is regarded as a power-efficient learning machine, capable of carrying out complex computations while consuming very few resources. A key property that makes such energy-efficient computation possible is the sparse communication among large populations of spiking neurons. The primary goal of neuromorphic hardware is to emulate brain-like neural networks to solve real-world problems. However, training on neuromorphic systems is challenging because gradient-based learning algorithms require non-local computations. Spiking neural networks operate in two fundamental modes: inference and learning. Learning acquires the network parameters by minimizing a particular cost or loss function so that the network produces correct outputs; inference, in turn, computes the output values from a given input and the learned parameters. This chapter presents how learning is conducted in neuromorphic computing systems.
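
To make the two modes concrete, below is a minimal sketch of a leaky integrate-and-fire (LIF) layer in Python/NumPy. It is an illustrative assumption, not the chapter's implementation: the layer sizes, time constants, input statistics, and the spike-count loss are all invented for the example. Inference is the forward simulation of membrane potentials and spikes under fixed weights; learning would adjust those weights to reduce the loss.

```python
# Minimal LIF inference/learning sketch (illustrative only; all sizes,
# constants, and the loss below are assumptions, not the chapter's code).
import numpy as np

rng = np.random.default_rng(seed=0)

T, n_in, n_out = 100, 8, 4        # simulation steps, input neurons, output neurons
dt, tau, v_th = 1.0, 20.0, 1.0    # time step, membrane time constant, firing threshold

# Network parameters: synaptic weights. "Learning" means adjusting these.
w = 0.5 * rng.standard_normal((n_in, n_out))

# Sparse random input spike trains (roughly 10% of time bins contain a spike).
in_spikes = (rng.random((T, n_in)) < 0.1).astype(float)

# Inference: given the input and fixed parameters w, simulate the membrane
# dynamics and compute the output spike trains.
v = np.zeros(n_out)                # membrane potentials
out_spikes = np.zeros((T, n_out))
for t in range(T):
    v += (dt / tau) * (-v) + in_spikes[t] @ w   # leaky integration of weighted input
    fired = v >= v_th                           # threshold crossing emits a spike
    out_spikes[t] = fired
    v[fired] = 0.0                              # reset membrane after spiking

# Learning would instead minimize a loss over the parameters, e.g. the
# squared error between output spike counts and some target counts.
target_counts = np.array([5.0, 2.0, 0.0, 8.0])
loss = float(np.sum((out_spikes.sum(axis=0) - target_counts) ** 2))
print("output spike counts:", out_spikes.sum(axis=0), "loss:", loss)
```

The gradient of such a loss with respect to the weights passes through the non-differentiable threshold function and across time steps, which is precisely the kind of non-local computation that is difficult to realize on neuromorphic hardware; practical workarounds discussed in this chapter's context include surrogate gradients, spike-timing-dependent plasticity, and ANN-to-SNN conversion.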

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Ben Abdallah, A., Dang, K.N. (2022). Learning in Neuromorphic Systems. In: Neuromorphic Computing Principles and Organization. Springer, Cham. https://doi.org/10.1007/978-3-030-92525-3_3

  • DOI: https://doi.org/10.1007/978-3-030-92525-3_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-92524-6

  • Online ISBN: 978-3-030-92525-3

  • eBook Packages: Computer Science; Computer Science (R0)
