Abstract
Many research fields, including astronomy, earth science, and bioinformatics, rely on parallel and distributed computing environments. As client requests grow, service providers face challenges such as task scheduling, security, resource management, and virtual machine migration. Scheduling problems are NP-hard: their large solution spaces make finding an optimal, or even near-optimal, solution prohibitively time-consuming. With recent advances in artificial intelligence, deep reinforcement learning (DRL) can be applied to scheduling problems. DRL combines the representational power of deep neural networks with reinforcement learning's feedback-based learning. This paper provides a comprehensive overview of DRL-based scheduling algorithms in distributed systems, categorizing both algorithms and applications. Surveyed articles are assessed on their main objectives, quality-of-service and scheduling parameters, and evaluation environments (i.e., simulation tools or real-world deployments). The literature indicates that RL-based algorithms, such as Q-learning, are effective for learning scaling and scheduling policies in cloud environments. Finally, challenges and directions for further research on deep reinforcement learning for scheduling are summarized (e.g., edge intelligence, an ideal dynamic task scheduling framework, human–machine interaction, resource-hungry artificial intelligence (AI), and sustainability).
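To illustrate the kind of feedback-based policy learning the survey covers, the sketch below shows tabular Q-learning on a toy scheduling problem. All details (two task sizes, two VMs, the speed and startup-overhead numbers, the reward model) are illustrative assumptions, not taken from the paper: the agent learns which VM to assign each task to, with reward equal to negative completion time.

```python
import random

random.seed(42)

# Toy problem (illustrative assumptions): two task sizes, two VMs.
# VM 1 is twice as fast but pays a fixed startup overhead per task,
# so the best VM depends on the task size.
SIZES = (1, 4)
SPEED = (1.0, 2.0)
STARTUP = (0.0, 1.0)

def completion_time(size, vm):
    return size / SPEED[vm] + STARTUP[vm]

# Q-table over (task size, chosen VM) pairs
Q = {(s, a): 0.0 for s in SIZES for a in (0, 1)}
alpha, eps = 0.2, 0.2  # learning rate, exploration rate

for step in range(5000):
    size = random.choice(SIZES)        # a task arrives
    if random.random() < eps:          # epsilon-greedy action selection
        a = random.randrange(2)
    else:
        a = max((0, 1), key=lambda x: Q[(size, x)])
    r = -completion_time(size, a)      # faster completion -> higher reward
    # one-step episode: no successor state, so the TD target is just r
    Q[(size, a)] += alpha * (r - Q[(size, a)])

# learned greedy policy: small tasks avoid the startup overhead,
# large tasks prefer the faster VM
policy = {s: max((0, 1), key=lambda a: Q[(s, a)]) for s in SIZES}
print(policy)
```

Deep RL, as surveyed here, replaces the lookup table with a neural network so that much richer states (queue lengths, task graphs, resource utilization) can be handled.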
Availability of data and materials
Not applicable.
Funding
Not applicable.
Author information
Authors and Affiliations
Contributions
Zahra Jalali Khalil Abadi designed the study, gathered the data for the article, and participated in writing—original draft preparation. Najme Mansouri contributed to the investigation, interpretation of the results, and writing—original draft preparation. Mohammad Masoud Javidi was involved in the verification, writing—reviewing and editing.
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Jalali Khalil Abadi, Z., Mansouri, N. & Javidi, M.M. Deep reinforcement learning-based scheduling in distributed systems: a critical review. Knowl Inf Syst (2024). https://doi.org/10.1007/s10115-024-02167-7