How to cite this paper
Hu, K. (2025). Age of information-aware deep reinforcement learning for efficient cloud resource scheduling in dynamic environments. International Journal of Industrial Engineering Computations, 16(2), 247-260.
References
Alla, S.B., Alla, H.B., Touhafi, A., & Ezzati, A. (2019). An Efficient Energy-Aware Tasks Scheduling with Deadline-Constrained in Cloud Computing. Computers, 8(2), 46.
Belgacem, A., Mahmoudi, S., & Kihl, M. (2022). Intelligent Multi-Agent Reinforcement Learning Model for Resources Allocation in Cloud Computing. Journal of King Saud University-Computer and Information Sciences, 34(6), 2391-2404.
Beloglazov, A., Abawajy, J., & Buyya, R. (2012). Energy-aware Resource Allocation Heuristics for Efficient Management of Data Centers for Cloud Computing. Future Generation Computer Systems, 28(5), 755-768.
Chang, T., Cao, X., & Zheng, W. (2024). A Lightweight Sensor Scheduler Based on AoI Function for Remote State Estimation over Lossy Wireless Channels. IEEE Transactions on Automatic Control, 69(3), 1697-1704.
Cheng, M., Li, J., & Nazarian, S. (2018). DRL-cloud: Deep Reinforcement Learning-Based Resource Provisioning and Task Scheduling for Cloud Service Providers. In 2018 23rd Asia and South Pacific Design Automation Conference (ASP-DAC), 129-134.
Costa, M., Codreanu, M., & Ephremides, A. (2016). On the Age of Information in Status Update Systems with Packet Management. IEEE Transactions on Information Theory, 62(4), 1897-1910.
Feng, Z., Xu, W., & Cao, J. (2024). Distributed Nash Equilibrium Computation Under Round-Robin Scheduling Protocol. IEEE Transactions on Automatic Control, 69(1), 339-346.
Gonzalez, N.M., Carvalho, T.C.M.B., & Miers, C.C. (2017). Cloud Resource Management: Towards Efficient Execution of Large-Scale Scientific Applications and Workflows on Complex Infrastructures. Journal of Cloud Computing, 6(13), 1-20.
Hatami, M., Leinonen, M., & Codreanu, M. (2021). AoI Minimization in Status Update Control with Energy Harvesting Sensors. IEEE Transactions on Communications, 69(12), 8335-8351.
Hu, Z., & Li, D. (2022). Improved Heuristic Job Scheduling Method to Enhance Throughput for Big Data Analytics. Tsinghua Science and Technology, 27(2), 344-357.
Huang, H., Ye, Q., & Zhou, Y. (2022). Deadline-Aware Task Offloading with Partially-Observable Deep Reinforcement Learning for Multi-Access Edge Computing. IEEE Transactions on Network Science and Engineering, 9(6), 3870-3885.
Islam, M.T., Karunasekera, S., & Buyya, R. (2022). Performance and Cost-Efficient Spark Job Scheduling Based on Deep Reinforcement Learning in Cloud Computing Environments. IEEE Transactions on Parallel and Distributed Systems, 33(7), 1695-1710.
Jayanetti, A., Halgamuge, S., & Buyya, R. (2024). Multi-Agent Deep Reinforcement Learning Framework for Renewable Energy-Aware Workflow Scheduling on Distributed Cloud Data Centers. IEEE Transactions on Parallel and Distributed Systems, 35(4), 604-615.
Jhunjhunwala, P.R., Sombabu, B., & Moharir, S. (2020). Optimal AoI-Aware Scheduling and Cycles in Graphs. IEEE Transactions on Communications, 68(3), 1593-1603.
Kadota, I., Sinha, A., Uysal-Biyikoglu, E., Singh, R., & Modiano, E. (2018). Scheduling Policies for Minimizing Age of Information in Broadcast Wireless Networks. IEEE/ACM Transactions on Networking, 26(6), 2637-2650.
Khan, S.G., Herrmann, G., Lewis, F.L., Pipe, T., & Melhuish, C. (2012). Reinforcement Learning and Optimal Adaptive Control: An Overview and Implementation Examples. Annual Reviews in Control, 36(1), 42-59.
Li, C., Huang, Y., Li, S., Chen, Y., Jalaian, B.A., & Hou, Y.T. (2021). Minimizing AoI in a 5G-based IoT Network under Varying Channel Conditions. IEEE Internet of Things Journal, 8(19), 14543-14558.
Li, R., Ma, Q., Gong, J., Zhou, Z., & Chen, X. (2021). Age of Processing: Age-Driven Status Sampling and Processing Offloading for Edge-Computing-Enabled Real-Time IoT Applications. IEEE Internet of Things Journal, 8(19), 14471-14484.
Moltafet, M., Leinonen, M., & Codreanu, M. (2020). On the Age of Information in Multi-Source Queueing Models. IEEE Transactions on Communications, 68(8), 5003-5017.
Nie, L., Wang, X., Sun, W., Li, Y., Li, S., & Zhang, P. (2021). Imitation-Learning-Enabled Vehicular Edge Computing: Toward Online Task Scheduling. IEEE Network, 35(3), 102-108.
Pal, S., Jhanjhi, N.Z., Abdulbaqi, A.S., Akila, D., Alsubaei, F.S., & Almazroi, A.A. (2023). An Intelligent Task Scheduling Model for Hybrid Internet of Things and Cloud Environment for Big Data Applications. Sustainability, 15(6), article no. 5104.
Park, B.S., Lee, H., Lee, H.T., Eun, Y., Jeon, D., Zhu, Z., Lee, H., & Jung, Y.C. (2018). Comparison of First-Come First-Served and Optimization Based Scheduling Algorithms for Integrated Departure and Arrival Management. In 2018 Aviation Technology, Integration, and Operations Conference, paper no. 3842.
Petrillo, A., Pescapé, A., & Santini, S. (2021). A Secure Adaptive Control for Cooperative Driving of Autonomous Connected Vehicles in the Presence of Heterogeneous Communication Delays and Cyberattacks. IEEE Transactions on Cybernetics, 51(3), 1134-1149.
Qin, Z., Wei, Z., Qu, Y., Zhou, F.H., Wang, H., & Ng, D.W.K. (2023). AoI-Aware Scheduling for Air-Ground Collaborative Mobile Edge Computing. IEEE Transactions on Wireless Communications, 22(5), 2989-3005.
Sahni, J., & Vidyarthi, D.P. (2018). A Cost-Effective Deadline-Constrained Dynamic Scheduling Algorithm for Scientific Workflows in a Cloud Environment. IEEE Transactions on Cloud Computing, 6(1), 2-18.
Singh, A.K., Leech, C., Reddy, B.K., Al-Hashimi, B.M., & Merrett, G.V. (2017). Learning-based Run-Time Power and Energy Management of Multi/Many-Core Systems: Current and Future Trends. Journal of Low Power Electronics, 13(3), 310-325.
Song, J., Gunduz, D., & Choi, W. (2024). Optimal Scheduling Policy for Minimizing Age of Information with a Relay. IEEE Internet of Things Journal, 11(4), 5623-5637.
Tao, Y., Qiu, J., & Lai, S. (2022). A Hybrid Cloud and Edge Control Strategy for Demand Responses Using Deep Reinforcement Learning and Transfer Learning. IEEE Transactions on Cloud Computing, 10(1), 56-71.
Ullah, I., Lim, H.K., Seok, Y.J., & Han, Y.H. (2023). Optimizing Task Offloading and Resource Allocation in Edge-Cloud Networks: A DRL Approach. Journal of Cloud Computing, 12(1), article no. 112.
Wang, B., Liu, F., & Lin, W. (2021). Energy-Efficient VM Scheduling Based on Deep Reinforcement Learning. Future Generation Computer Systems, 125, 616-628.
Wu, H., Zhang, Z., Guan, C., Wolter, K., & Xu, M.X. (2020). Collaborate Edge and Cloud Computing with Distributed Deep Learning for Smart City Internet of Things. IEEE Internet of Things Journal, 7(9), 8099-8110.
Xu, C., Yang, H.H., Wang, X., & Quek, T.Q.S. (2020). Optimizing Information Freshness in Computing-Enabled IoT Networks. IEEE Internet of Things Journal, 7(2), 971-985.
Yates, R.D., Sun, Y., Brown, D.R., Kaul, S.K., Modiano, E., & Ulukus, S. (2021). Age of Information: An Introduction and Survey. IEEE Journal on Selected Areas in Communications, 39(5), 1183-1210.