Abstract
Current cloud training (or cloud–edge training) faces challenges in AI services that require continuous learning and data privacy. Naturally, the edge architecture, which consists of a large number of edge nodes with modest computing resources, can alleviate network pressure and protect data privacy by processing data or performing training locally. Training at the edge, or potentially across "end–edge–cloud," with the edge as the core architecture of training, is called "AI Training at Edge." Such training may require significant resources to digest distributed data and exchange model updates within the hierarchical structure. In particular, federated learning (FL) is an emerging distributed learning setting that is promising for addressing these issues. For devices with diverse capabilities and limited network conditions in edge computing, FL can preserve privacy while handling non-IID training data, and it offers promising scalability in terms of communication efficiency, resource optimization, and security. As the principal content of this chapter, selected works on FL are listed in the chapter's first table.
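To make the FL setting concrete, the sketch below illustrates the federated averaging (FedAvg) idea of McMahan et al. that many of the works surveyed in this chapter build on: each client trains locally on its own (possibly non-IID) data, and a server averages the returned parameters, so raw data never leaves the devices. The linear model, learning rate, and toy client data are illustrative assumptions, not the chapter's actual algorithms or experiments.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=1):
    """One client's local training step: a few epochs of gradient
    descent on a linear model with squared loss (a stand-in for any
    local optimizer running on the edge device)."""
    w = weights.copy()
    for _ in range(epochs):
        preds = data @ w
        grad = data.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """One communication round: every client trains locally, then the
    server averages the returned weights, weighted by local dataset
    size. Only model parameters are exchanged, never raw data."""
    new_weights, sizes = [], []
    for data, labels in clients:
        new_weights.append(local_update(global_w, data, labels))
        sizes.append(len(labels))
    return np.average(new_weights, axis=0, weights=np.asarray(sizes, float))

# Toy non-IID setup: each client observes a different region of
# feature space, so no single client's data is representative.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for shift in (-2.0, 0.0, 2.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    y = X @ true_w + rng.normal(0, 0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(100):
    w = fedavg_round(w, clients)
print(w)  # converges toward the true weights [2, -1]
```

Weighting the average by local dataset size is the standard FedAvg choice; the communication-efficiency techniques discussed in this chapter (e.g., quantization, sparsification, model pruning) would compress the parameters exchanged in each round of such a loop.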
Copyright information
© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this chapter
Cite this chapter
Wang, X., Han, Y., Leung, V.C.M., Niyato, D., Yan, X., Chen, X. (2020). Artificial Intelligence Training at Edge. In: Edge AI. Springer, Singapore. https://doi.org/10.1007/978-981-15-6186-3_6
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-6185-6
Online ISBN: 978-981-15-6186-3
eBook Packages: Computer Science (R0)