Artificial Intelligence Training at Edge

Chapter in Edge AI (Springer, Singapore, 2020)

Abstract

Cloud-based training (or cloud–edge training) is being challenged by AI services that demand continuous learning and data privacy. The edge architecture, which consists of a large number of edge nodes with modest computing resources, naturally relieves network pressure and protects data privacy by processing data, and even training models, close to where the data are generated. Training at the edge, or potentially across the "end–edge–cloud" hierarchy with the edge as the core of the training architecture, is called "AI training at edge." Such training may require significant resources to digest distributed data and exchange updates within the hierarchical structure. Federated learning (FL), an emerging distributed learning setting, is especially promising for addressing these issues. For devices with diverse capabilities under the limited network conditions of edge computing, FL preserves privacy while handling non-IID training data, and it scales well through communication-efficient training, resource optimization, and security mechanisms. As the principal content of this chapter, selected works on FL are listed in the first table of this chapter.
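The core of most FL systems is the federated averaging (FedAvg) rule: each client runs a few steps of training on its private data, and a server averages the returned models weighted by the clients' data sizes, so raw data never leave the devices. Below is a minimal, self-contained sketch of that loop on a toy linear-regression task with deliberately non-IID clients; the model, the data split, and all names are illustrative assumptions rather than the chapter's implementation.

```python
# Minimal sketch of federated averaging (FedAvg) over simulated edge clients.
# Toy example only: the linear model, client data, and helper names are
# assumptions for illustration, not the chapter's code.
import numpy as np

rng = np.random.default_rng(0)

def local_train(w, X, y, lr=0.1, epochs=5):
    """A few epochs of full-batch gradient descent on one client's private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

# Simulate non-IID clients: each edge node sees a different slice of the input space.
w_true = np.array([2.0, -1.0])
clients = []
for shift in (-2.0, 0.0, 2.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    y = X @ w_true + rng.normal(0, 0.1, size=50)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(20):  # communication rounds
    # Each client trains locally; only model parameters leave the device,
    # never the raw data -- this is the privacy argument for FL.
    local_models = [local_train(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # Server aggregates with data-size weights (the FedAvg rule).
    w_global = np.average(local_models, axis=0, weights=sizes)

print("recovered weights:", w_global)  # should approach w_true
```

In a real edge deployment, local training would be on-device SGD over a deep model, and the averaging step would run on an edge or cloud aggregator, typically combined with the update-compression, resource-allocation, and secure-aggregation techniques this chapter surveys.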

Copyright information

© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Wang, X., Han, Y., Leung, V.C.M., Niyato, D., Yan, X., Chen, X. (2020). Artificial Intelligence Training at Edge. In: Edge AI. Springer, Singapore. https://doi.org/10.1007/978-981-15-6186-3_6

  • DOI: https://doi.org/10.1007/978-981-15-6186-3_6

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-6185-6

  • Online ISBN: 978-981-15-6186-3

  • eBook Packages: Computer Science; Computer Science (R0)
