This study presents a novel resource scheduling framework for cloud computing environments that incorporates the Age of Information (AOI) metric into scheduling decisions, enabling precise quantification and optimization of information freshness. The framework leverages an enhanced deep reinforcement learning algorithm to adaptively learn optimal scheduling policies in dynamic cloud settings. We introduce a multidimensional reward function that considers not only traditional metrics such as resource utilization and task completion time but also AOI as a core indicator, thereby achieving system-level optimization of information freshness. The method incorporates prioritized experience replay and n-step learning, which improve learning efficiency and policy stability. Extensive simulation experiments demonstrate that the framework maintains a low average AOI under varying workloads while satisfying resource-capacity and energy-consumption constraints. The approach offers a theoretical foundation and practical guidance for improving real-time cloud service quality and supporting timely decision-making in edge computing scenarios.
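To make the two core ingredients concrete, the sketch below illustrates one plausible shape for a multidimensional, AOI-aware reward and for an n-step return. This is a minimal illustration, not the paper's actual formulation: the weights (`W_AOI`, `W_UTIL`, `W_TIME`, `W_ENERGY`), the normalizers (`t_max`, `e_max`), and the function names are all hypothetical assumptions.

```python
import numpy as np

# Hypothetical trade-off weights -- illustrative only, not the paper's values.
W_AOI, W_UTIL, W_TIME, W_ENERGY = 0.4, 0.3, 0.2, 0.1

def reward(aoi_values, utilization, completion_time, energy,
           t_max=100.0, e_max=500.0):
    """Multidimensional reward: rewards fresh information (low AOI),
    high resource utilization, short completion time, and low energy use."""
    avg_aoi = np.mean(aoi_values)          # average Age of Information across tasks
    freshness = 1.0 / (1.0 + avg_aoi)      # maps AOI to (0, 1]; fresher -> higher
    time_term = 1.0 - min(completion_time / t_max, 1.0)
    energy_term = 1.0 - min(energy / e_max, 1.0)
    return (W_AOI * freshness + W_UTIL * utilization
            + W_TIME * time_term + W_ENERGY * energy_term)

def n_step_return(rewards, bootstrap_value, gamma=0.99):
    """Standard n-step bootstrapped return: sums n discounted rewards,
    then bootstraps from the value estimate of the state n steps ahead.
    Propagates delayed AOI feedback to earlier scheduling decisions faster
    than a 1-step target."""
    g = bootstrap_value
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```

Under this kind of reward, an action that trades a small utilization loss for a large drop in average AOI can still score higher, which is what lets the learned policy prioritize information freshness alongside conventional scheduling objectives.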