Training Generative Adversarial Networks (GANs) Over Parameter Server and Worker Node Architecture

  • Conference paper
  • Conference proceedings: Machine Learning and Big Data Analytics (ICMLBDA 2022)
  • Part of the book series: Springer Proceedings in Mathematics & Statistics (PROMS, volume 401)

Abstract

Among the latest technological advances in artificial intelligence (AI) is the development and widespread use of Generative Adversarial Networks (GANs). GANs have made progress in numerous applications such as image editing, style transfer, and scene generation. However, these generative models demand high computation because a GAN is made up of two deep neural networks and is trained on large datasets. Like other AI models, GANs also face the problem of insufficient data when training for some real-world situations. In many cases, the available data may be limited and distributed over various worker nodes (i.e., end users), where the local datasets are intrinsically private and the workers do not want to share them. In this chapter, we address the issue of training GANs in a distributed manner so that they can learn from datasets spread across multiple worker nodes. We develop a training framework for GANs under the parameter server and worker node setting. Under this framework, the workers can produce results similar to real data while operating in a fully distributed way and keeping their information confidential. Experimental results on the CIFAR-10 dataset indicate that our architecture can produce high-quality data samples that resemble real data and can be used in various real-life applications.
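The chapter's own training procedure and code are not reproduced on this page. As a rough illustration of the setting the abstract describes, the sketch below shows one way a parameter-server/worker loop for GAN training can be wired up in PyTorch: each worker pulls the current generator and discriminator weights, runs a few adversarial updates on its private data shard, and the server averages the returned weights. The FedAvg-style averaging, the toy networks, the hyperparameters, and the random stand-in for per-worker CIFAR-10 shards are all assumptions made for illustration, not the authors' implementation.

    # Minimal sketch (not the authors' code): parameter-server-style GAN training
    # with FedAvg-like weight averaging; each worker's data stays local.
    import copy
    import torch
    import torch.nn as nn

    Z = 64  # latent dimension (assumed)

    def make_generator():
        return nn.Sequential(nn.Linear(Z, 256), nn.ReLU(),
                             nn.Linear(256, 32 * 32 * 3), nn.Tanh())

    def make_discriminator():
        return nn.Sequential(nn.Linear(32 * 32 * 3, 256), nn.LeakyReLU(0.2),
                             nn.Linear(256, 1))

    def local_update(g_state, d_state, data, steps=5, lr=2e-4):
        """One worker: pull global weights, train on private data, push weights back."""
        G, D = make_generator(), make_discriminator()
        G.load_state_dict(g_state)
        D.load_state_dict(d_state)
        opt_g = torch.optim.Adam(G.parameters(), lr=lr, betas=(0.5, 0.999))
        opt_d = torch.optim.Adam(D.parameters(), lr=lr, betas=(0.5, 0.999))
        bce = nn.BCEWithLogitsLoss()
        for _ in range(steps):
            real = data[torch.randint(len(data), (32,))]   # private mini-batch
            fake = G(torch.randn(32, Z))
            # discriminator step: real -> 1, fake -> 0
            d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
            opt_d.zero_grad()
            d_loss.backward()
            opt_d.step()
            # generator step: fool the discriminator
            g_loss = bce(D(fake), torch.ones(32, 1))
            opt_g.zero_grad()
            g_loss.backward()
            opt_g.step()
        return G.state_dict(), D.state_dict()

    def average(states):
        """Parameter server: average the workers' returned weights."""
        out = copy.deepcopy(states[0])
        for k in out:
            out[k] = torch.stack([s[k].float() for s in states]).mean(0)
        return out

    # Stand-in for private, per-worker CIFAR-10 shards (flattened 32x32x3 images in [-1, 1]).
    workers = [torch.rand(256, 32 * 32 * 3) * 2 - 1 for _ in range(4)]

    g_state = make_generator().state_dict()
    d_state = make_discriminator().state_dict()
    for rnd in range(3):  # communication rounds
        results = [local_update(g_state, d_state, shard) for shard in workers]
        g_state = average([g for g, _ in results])
        d_state = average([d for _, d in results])
    print("finished", rnd + 1, "rounds")

In an actual deployment the workers would run in separate processes or machines and exchange weights with the server over the network; the single-process loop above only mirrors the data flow of such a setup.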



Author information


Corresponding author

Correspondence to Amit Ranjan.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Ranjan, A., Misra, R. (2023). Training Generative Adversarial Networks (GANs) Over Parameter Server and Worker Node Architecture. In: Misra, R., Omer, R., Rajarajan, M., Veeravalli, B., Kesswani, N., Mishra, P. (eds) Machine Learning and Big Data Analytics. ICMLBDA 2022. Springer Proceedings in Mathematics & Statistics, vol 401. Springer, Cham. https://doi.org/10.1007/978-3-031-15175-0_33
