Accelerated Optimization for Simulation of Brain Spiking Neural Network on GPGPUs

  • Conference paper
Algorithms and Architectures for Parallel Processing (ICA3PP 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14492)

Abstract

As the application scenarios for large-scale spiking neural networks (SNNs) multiply, efficient SNN simulation becomes increasingly important. However, simulating such large-scale networks incurs heavy computation and communication overhead, especially at high firing rates. To address this problem, we propose an effective accelerated optimization method for simulating SNNs on GPGPUs that accounts for both workload balance and communication overhead. We design a workload-oriented network partition algorithm that minimizes the number of external synapses while keeping the workload balanced. We also optimize spike synchronization by combining a finer synchronization granularity, data compression, and full-duplex communication, which lowers communication overhead and improves performance. Furthermore, to avoid thread warp divergence, we assign an entire thread block to each neuron in the spike propagation phase instead of first collecting the fired neurons, which simplifies the execution flow and enhances performance. Experimental results demonstrate that our simulator achieves 1.31× to 6.74× speedups for SNNs with different configurations and improves efficiency by 40.21% to 51.11% compared with state-of-the-art methods.
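As a rough illustration of the thread-block-per-neuron idea described in the abstract, the CUDA sketch below assigns one block to each pre-synaptic neuron and lets its threads stride over that neuron's outgoing synapses; because the fired/not-fired test is uniform across the block, no warp diverges on it. The connectivity layout (row_ptr, targets, weights) and all names are illustrative assumptions, not the authors' actual kernel.

```cuda
// Hypothetical CSR-style connectivity: row_ptr[n] .. row_ptr[n+1] delimits the
// outgoing synapses of neuron n; targets[] and weights[] describe each synapse.
// One thread block is launched per pre-synaptic neuron. If that neuron did not
// fire in the current step, the whole block exits together, so warps never
// diverge on a per-thread "did my neuron fire?" test.
__global__ void propagate_spikes(const unsigned char *fired,   // 1 if the neuron fired this step
                                 const int *row_ptr,           // CSR offsets, num_neurons + 1 entries
                                 const int *targets,           // post-synaptic neuron ids
                                 const float *weights,         // synaptic weights
                                 float *input_current)         // accumulated input per post-synaptic neuron
{
    int pre = blockIdx.x;            // one block per pre-synaptic neuron
    if (!fired[pre]) return;         // uniform branch: the entire block returns

    int begin = row_ptr[pre];
    int end   = row_ptr[pre + 1];

    // Threads of the block stride over this neuron's outgoing synapses.
    for (int s = begin + threadIdx.x; s < end; s += blockDim.x) {
        atomicAdd(&input_current[targets[s]], weights[s]);
    }
}
```

Such a kernel would be launched once per simulation step, e.g. propagate_spikes<<<num_neurons, 128>>>(...); atomicAdd is used because several firing neurons may target the same post-synaptic neuron within one step.

Likewise, the data-compression part of the spike synchronization could plausibly amount to packing per-neuron fired flags into a bitmask before exchanging them between GPUs; the sketch below shows only that generic idea with hypothetical names and does not reflect the authors' actual wire format or full-duplex scheme.

```cuda
// Pack one fired flag per neuron into 32-neuron words, shrinking the per-step
// synchronization payload by roughly 32x before it is exchanged between GPUs.
__global__ void pack_spike_bitmask(const unsigned char *fired, // 1 if the neuron fired this step
                                   unsigned int *bitmask,      // ceil(num_neurons / 32) words
                                   int num_neurons)
{
    int word = blockIdx.x * blockDim.x + threadIdx.x;
    int base = word * 32;
    if (base >= num_neurons) return;

    unsigned int bits = 0;
    for (int i = 0; i < 32 && base + i < num_neurons; ++i) {
        bits |= (unsigned int)(fired[base + i] != 0) << i;
    }
    bitmask[word] = bits;
}
```

The packed words could then be exchanged in both directions concurrently, for example with cudaMemcpyPeerAsync on one stream per direction, which is one plausible reading of the full-duplex communication mentioned above.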


Acknowledgements

This work is supported in part by the Open Project Program for the Engineering Research Center of Software/Hardware Co-design Technology and Application, Ministry of Education (East China Normal University), Grant No. 67000-42990016, and in part by the Fundamental Research Funds for the Central Universities, Sun Yat-sen University, Grant No. 23qnpy30/67000-31610023.

Author information

Corresponding author

Correspondence to Kai Huang.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Zhang, F., Cui, M., Zhang, J., Ling, Y., Liu, H., Huang, K. (2024). Accelerated Optimization for Simulation of Brain Spiking Neural Network on GPGPUs. In: Tari, Z., Li, K., Wu, H. (eds) Algorithms and Architectures for Parallel Processing. ICA3PP 2023. Lecture Notes in Computer Science, vol 14492. Springer, Singapore. https://doi.org/10.1007/978-981-97-0811-6_10

  • DOI: https://doi.org/10.1007/978-981-97-0811-6_10

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-0810-9

  • Online ISBN: 978-981-97-0811-6

  • eBook Packages: Computer Science, Computer Science (R0)
