Fundamentals of Artificial Intelligence

A chapter of the book Edge AI

Abstract

AI is a broad field of research encompassing many methods of research value. However, owing to the operating structure and computing-resource characteristics of edge computing, deep learning (DL) has become the AI method most closely related to, and most representative of, edge computing. In addition, because resources at the edge are limited, targeted solutions for resource-intensive deep learning are still lacking. Therefore, in this book we focus on deep learning workloads that require substantial computing resources. In computer vision (CV), natural language processing (NLP), and other areas of AI, DL is adopted in a myriad of applications and has demonstrated superior performance (LeCun et al., Nature 521(7553):436–444, 2015). Currently, large numbers of GPUs, TPUs, or FPGAs must be deployed in the cloud to process DL service requests. As the in-depth analysis of the development bottlenecks of the current cloud computing model in the previous two chapters has shown, the response-time requirements of some deep learning applications are extremely demanding, and cloud computing can no longer meet them. It is therefore necessary to consider transferring deep learning tasks to the edge computing framework. Because the edge computing architecture covers a large number of distributed edge devices, it can be utilized to serve DL better. However, edge devices typically have limited computing power and energy budgets compared with the cloud, so the combination of DL and edge computing is not straightforward: it requires a comprehensive understanding of both DL models and the features of edge computing for design and deployment. In this chapter, we briefly introduce DL and related technical terms, paving the way for discussing the integration of DL and edge computing.
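
To make the response-time argument concrete, here is a minimal back-of-the-envelope sketch in Python (ours, not the book's; every number in it is an illustrative assumption) of why moving inference to the edge can beat a faster cloud accelerator once input transfer and network round-trip time are counted:

# Back-of-the-envelope comparison of cloud vs. edge inference latency.
# All figures are illustrative assumptions, not measurements from this book.

def end_to_end_latency_ms(upload_kb, bandwidth_mbps, rtt_ms, compute_ms):
    """Latency = input transfer time + network round trip + compute time."""
    transfer_ms = (upload_kb * 8) / (bandwidth_mbps * 1000) * 1000
    return transfer_ms + rtt_ms + compute_ms

# Classifying one ~100 KB camera frame: a hypothetical cloud GPU computes
# in ~5 ms but pays for the uplink; a modest edge accelerator needs ~40 ms
# but touches no network.
cloud_ms = end_to_end_latency_ms(upload_kb=100, bandwidth_mbps=10, rtt_ms=50, compute_ms=5)
edge_ms = end_to_end_latency_ms(upload_kb=0, bandwidth_mbps=10, rtt_ms=0, compute_ms=40)

print(f"cloud: {cloud_ms:.0f} ms, edge: {edge_ms:.0f} ms")  # cloud: 135 ms, edge: 40 ms

Under these assumed figures, the slower edge accelerator still cuts end-to-end latency by roughly two thirds; for an application with, say, a 100 ms deadline, only the edge path qualifies.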


References

  1. H. Li, K. Ota, M. Dong, Learning IoT in edge: deep learning for the Internet of Things with edge computing. IEEE Netw. 32(1), 96–101 (2018)

  2. S.S. Haykin, Neural Networks and Learning Machines (Pearson Prentice Hall, Englewood Cliffs, 2009)

  3. R. Collobert, S. Bengio, Links between perceptrons, MLPs and SVMs, in Proceedings of the Twenty-First International Conference on Machine Learning (ICML 2004) (2004), p. 23

  4. C.D. Manning, H. Schütze, Foundations of Statistical Natural Language Processing (MIT Press, Cambridge, 1999)

  5. M.D. Zeiler, R. Fergus, Visualizing and understanding convolutional networks, in Proceedings of the 13th European Conference on Computer Vision (ECCV 2014) (2014), pp. 818–833

  6. I. Goodfellow, J. Pouget-Abadie, M. Mirza, et al., Generative adversarial nets, in Advances in Neural Information Processing Systems 27 (NeurIPS 2014) (2014), pp. 2672–2680

  7. J. Schmidhuber, Deep learning in neural networks: an overview. Neural Netw. 61, 85–117 (2015)

  8. S. Hochreiter, J. Schmidhuber, Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)

  9. S.J. Pan, Q. Yang, A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22(10), 1345–1359 (2010)

  10. G. Hinton, O. Vinyals, J. Dean, Distilling the knowledge in a neural network (2015). arXiv preprint arXiv:1503.02531

  11. S.S. Mousavi, M. Schukat, E. Howley, Deep reinforcement learning: an overview, in Proceedings of the 2016 SAI Intelligent Systems Conference (IntelliSys 2016) (2016), pp. 426–440

  12. V. Mnih, K. Kavukcuoglu, D. Silver, et al., Human-level control through deep reinforcement learning. Nature 518(7540), 529–533 (2015)

  13. H. Van Hasselt, A. Guez, D. Silver, Deep reinforcement learning with double Q-learning, in Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI 2016) (2016), pp. 2094–2100

  14. Z. Wang, T. Schaul, M. Hessel, et al., Dueling network architectures for deep reinforcement learning, in Proceedings of the 33rd International Conference on Machine Learning (ICML 2016) (2016), pp. 1995–2003

  15. T.P. Lillicrap, J.J. Hunt, A. Pritzel, et al., Continuous control with deep reinforcement learning, in Proceedings of the 4th International Conference on Learning Representations (ICLR 2016) (2016)

  16. V. Mnih, A.P. Badia, M. Mirza, et al., Asynchronous methods for deep reinforcement learning, in Proceedings of the 33rd International Conference on Machine Learning (ICML 2016) (2016), pp. 1928–1937

  17. J. Schulman, F. Wolski, P. Dhariwal, et al., Proximal policy optimization algorithms (2017). arXiv preprint arXiv:1707.06347

  18. R.S. Sutton, D. McAllester, S. Singh, Y. Mansour, Policy gradient methods for reinforcement learning with function approximation, in Proceedings of the 12th International Conference on Neural Information Processing Systems (NeurIPS 1999) (1999), pp. 1057–1063

  19. J. Dean, G.S. Corrado, R. Monga, et al., Large scale distributed deep networks, in Advances in Neural Information Processing Systems 25 (NeurIPS 2012) (2012), pp. 1223–1231

  20. Y. Zou, X. Jin, Y. Li, et al., Mariana: Tencent deep learning platform and its applications. Proc. VLDB Endow. 7(13), 1772–1777 (2014)

  21. X. Chen, A. Eversole, G. Li, et al., Pipelined back-propagation for context-dependent deep neural networks, in Proceedings of the 13th Annual Conference of the International Speech Communication Association (INTERSPEECH 2012) (2012), pp. 26–29

  22. F. Seide, H. Fu, J. Droppo, et al., 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs, in 15th Annual Conference of the International Speech Communication Association (INTERSPEECH 2014) (2014), pp. 1058–1062

  23. A. Coates, B. Huval, T. Wang, et al., Deep learning with COTS HPC systems, in Proceedings of the 30th International Conference on Machine Learning (ICML 2013) (2013), pp. 1337–1345

  24. P. Moritz, R. Nishihara, I. Stoica, M.I. Jordan, SparkNet: training deep networks in Spark (2015). arXiv preprint arXiv:1511.06051

  25. Theano: a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. https://github.com/Theano/Theano

  26. M. Abadi, P. Barham, et al., TensorFlow: a system for large-scale machine learning, in Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation (OSDI 2016) (2016), pp. 265–283

  27. Y. Jia, E. Shelhamer, et al., Caffe: convolutional architecture for fast feature embedding, in Proceedings of the 22nd ACM International Conference on Multimedia (2014), pp. 675–678

  28. A. Géron, Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems (O'Reilly Media, Sebastopol, 2019)

  29. A. Paszke, S. Gross, et al., PyTorch: an imperative style, high-performance deep learning library, in Advances in Neural Information Processing Systems 32 (NeurIPS 2019) (2019), pp. 8024–8035

  30. Y. LeCun, Y. Bengio, G. Hinton, Deep learning. Nature 521(7553), 436–444 (2015)


Copyright information

© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Wang, X., Han, Y., Leung, V.C.M., Niyato, D., Yan, X., Chen, X. (2020). Fundamentals of Artificial Intelligence. In: Edge AI. Springer, Singapore. https://doi.org/10.1007/978-981-15-6186-3_3


  • DOI: https://doi.org/10.1007/978-981-15-6186-3_3


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-6185-6

  • Online ISBN: 978-981-15-6186-3

  • eBook Packages: Computer Science, Computer Science (R0)
