
Deep reinforcement learning-based scheduling in distributed systems: a critical review

Review article, published in Knowledge and Information Systems.

Abstract

Many fields of research, including astronomy, earth science, and bioinformatics, rely on parallel and distributed computing environments. As client requests grow, service providers face challenges such as task scheduling, security, resource management, and virtual machine migration. Scheduling problems are NP-hard, so finding optimal or even near-optimal solutions is time-consuming due to their large solution spaces. With recent advances in artificial intelligence, deep reinforcement learning (DRL) can be applied to scheduling problems: DRL combines the representational power of deep neural networks with reinforcement learning's feedback-based learning. This paper provides a comprehensive overview of DRL-based scheduling algorithms in distributed systems by categorizing algorithms and applications. Several articles are assessed based on their main objectives, quality-of-service and scheduling parameters, and evaluation environments (i.e., simulation tools or real-world deployments). The literature review indicates that RL-based algorithms, such as Q-learning, are effective for learning scaling and scheduling policies in cloud environments. Finally, challenges and directions for further research on DRL-based scheduling are summarized (e.g., edge intelligence, an ideal dynamic task scheduling framework, human-machine interaction, resource-hungry artificial intelligence (AI), and sustainability).
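The core idea the survey covers can be illustrated with a toy example (not taken from the paper): a tabular Q-learning agent that assigns incoming tasks to machines so that the maximum machine load (makespan) stays small. The task lengths, hyperparameters, state encoding, and reward below are all illustrative assumptions, a minimal sketch rather than any surveyed algorithm.

```python
import random

# Toy scheduling problem (illustrative assumptions throughout):
# assign each task in an episode to one of M machines, minimizing makespan.
M = 3                               # number of machines (assumed)
TASKS = [4, 2, 7, 1, 5, 3, 6, 2]    # task lengths for one episode (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2   # learning rate, discount, exploration

def bucket(loads):
    """Discretize machine loads into a compact state: the machines
    ordered from least to most loaded."""
    return tuple(sorted(range(M), key=lambda m: loads[m]))

Q = {}  # Q[(state, action)] -> estimated value

def choose(state, rng):
    """Epsilon-greedy action selection over the Q table."""
    if rng.random() < EPS:
        return rng.randrange(M)
    return max(range(M), key=lambda a: Q.get((state, a), 0.0))

def run_episode(rng):
    loads = [0] * M
    for t in TASKS:
        s = bucket(loads)
        a = choose(s, rng)
        loads[a] += t
        r = -max(loads)             # reward: penalize the current makespan
        s2 = bucket(loads)
        best_next = max(Q.get((s2, a2), 0.0) for a2 in range(M))
        q = Q.get((s, a), 0.0)
        Q[(s, a)] = q + ALPHA * (r + GAMMA * best_next - q)
    return max(loads)               # episode makespan

rng = random.Random(0)
for _ in range(500):
    makespan = run_episode(rng)
print(makespan)
```

A DRL scheduler, as surveyed in this paper, replaces the Q table with a neural network, which lets it generalize over the much larger state spaces (task queues, resource profiles, QoS constraints) found in real cloud, fog, and edge environments.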


The article includes 28 figures (Figs. 1–28); captions are not available in this preview.


Availability of data and materials

Not applicable.


Funding

Not applicable.

Author information


Contributions

Zahra Jalali Khalil Abadi designed the study, gathered the data for the article, and participated in writing—original draft preparation. Najme Mansouri contributed to the investigation, interpretation of the results, and writing—original draft preparation. Mohammad Masoud Javidi was involved in the verification, writing—reviewing and editing.

Corresponding author

Correspondence to Najme Mansouri.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Jalali Khalil Abadi, Z., Mansouri, N. & Javidi, M.M. Deep reinforcement learning-based scheduling in distributed systems: a critical review. Knowl Inf Syst (2024). https://doi.org/10.1007/s10115-024-02167-7

