This paper investigates the integration of Explainable Artificial Intelligence (XAI) into Predictive Maintenance (PdM) systems, aiming to enhance the transparency, interpretability, and reliability of industrial applications. Its primary contribution is the Explainability Parameters (XPA) framework, a structured methodology for evaluating and applying XAI in PdM. The study systematically reviews recent advances and open challenges in the literature, categorising explanations into pre-modelling, in-modelling, and post-modelling processes, and analyses significant case studies across various industrial sectors to illustrate the practical implications and hurdles of XAI methodologies. Key findings indicate that while XAI substantially improves the effectiveness and trustworthiness of PdM by clarifying model predictions, its adoption is hindered by the complexity of industrial data and the absence of standardised evaluation methods. The XPA framework addresses these challenges by providing metrics tailored to specific applications and by advocating a multi-phase approach that converts technical model outputs into actionable maintenance recommendations. The originality of this paper lies in its comprehensive review and its establishment of rigorous standards for assessing XAI methodologies, thereby bridging the gap between theoretical frameworks and practical applications. By promoting adaptable XAI frameworks suited to real-world industrial needs, the study fosters trust in automated decision-making and deepens the understanding of XAI's role in PdM.
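To make the pre-/in-/post-modelling distinction concrete, the sketch below illustrates one widely used post-modelling (post-hoc, model-agnostic) technique, permutation feature importance, on a toy PdM-style failure classifier. All names, features, and the hand-rolled model are hypothetical and stand in for whatever trained model and sensor data a real deployment would use; the sketch is not drawn from the XPA framework itself.

```python
import random

random.seed(0)

def model(vib, temp, pres):
    # Hypothetical "failure risk" model over three sensor readings.
    # Failure is predicted when vibration and temperature are both high;
    # pressure is deliberately irrelevant, so its importance should be ~0.
    return int(vib > 0.6 and temp > 0.5)

# Synthetic labelled data; labels follow the same rule, so baseline accuracy is 1.0.
data = [(random.random(), random.random(), random.random()) for _ in range(500)]
labels = [model(v, t, p) for v, t, p in data]

def accuracy(rows):
    return sum(model(*r) == y for r, y in zip(rows, labels)) / len(rows)

baseline = accuracy(data)

def permutation_importance(feature_idx):
    # Post-hoc explanation: shuffle one feature column, keep the rest intact,
    # and report how much predictive accuracy drops as a result.
    col = [row[feature_idx] for row in data]
    random.shuffle(col)
    permuted = [tuple(col[i] if j == feature_idx else row[j] for j in range(3))
                for i, row in enumerate(data)]
    return baseline - accuracy(permuted)

importances = {name: permutation_importance(i)
               for i, name in enumerate(["vibration", "temperature", "pressure"])}
print(importances)
```

A maintenance engineer can read the resulting scores directly: features whose shuffling degrades accuracy drive the failure predictions, while near-zero scores flag sensors the model ignores, which is exactly the kind of technical output the abstract argues must be translated into actionable maintenance recommendations.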
