1 Introduction

Deep learning has contributed to artificial intelligence (AI) systems by speeding up and improving numerous tasks, including decision-making, prediction, anomaly and pattern detection, and recommendation. Although the accuracy of deep learning models has dramatically improved during the last decade, this improvement has often been achieved through increased model complexity. Such models may make common-sense mistakes in practice without providing any reasons for them, making it impossible to fully trust their decisions, and targeted model improvement and optimisation also become challenging [1]. Without reliable explanations that accurately represent how an AI system operates, humans still consider AI untrustworthy due to a variety of dynamics and uncertainties [2] when deploying AI applications in real-world environments. This motivates the inherent need and expectation from human users that AI systems should be explainable to help confirm their decisions.

Explainable artificial intelligence (XAI) is often considered a set of processes and methods used to describe deep learning models by characterizing model accuracy, transparency, and outcomes in AI systems [3]. XAI methods aim to provide human-readable explanations that help users comprehend and trust the outputs produced by deep learning algorithms. Additionally, regulations such as the European General Data Protection Regulation (GDPR) [4] have been introduced that drive further XAI research by demanding ethics [5], justification [6], and trust [15]. In other domains, such as medical image or signal recognition, where accuracy is paramount, the focus may be more on predictive power than interpretability [16].

The field of explainable artificial intelligence (XAI) has witnessed the emergence of numerous methods and techniques aimed at comprehending the intricate workings of deep learning models, and several survey papers have summarized these methods and the basic distinctions among different XAI approaches [3, 17, 18]. However, while certain surveys have focused on specific domains such as healthcare [19] or medical applications [20], there remains a substantial gap in the state-of-the-art analysis of existing approaches and their limitations across all XAI-enabled application domains. Closing this gap requires a comprehensive investigation covering the different requirements, suitable XAI approaches, and domain-specific limitations of each domain. Such an analysis is crucial because it deepens our understanding of how XAI techniques perform in real-world scenarios and helps identify the challenges and opportunities that arise when applying these approaches to different application domains. By bridging this gap, we can make significant strides towards developing more effective and reliable XAI systems tailored to specific domains and their unique characteristics.

In this survey, our primary objective is to provide a comprehensive overview of explainable artificial intelligence (XAI) approaches across various application domains by exploring and analysing the different methods and techniques employed in XAI and their application-specific considerations. We achieve this by utilizing three well-defined taxonomies, as depicted in Fig. 1. Unlike many existing surveys that solely focus on reviewing and comparing methods, we go beyond that by providing a domain mapping. This mapping provides insights into how XAI methods are interconnected and utilized across various application domains, and even in cases where domains intersect. Additionally, we delve into a detailed discussion on the limitations of the existing methods, acknowledging the areas where further improvements are necessary. Lastly, we summarize the future directions in XAI research, highlighting potential avenues for advancements and breakthroughs. Our contributions in this survey can be summarized as follows:

  • Develop a new taxonomy for the description of XAI approaches based on three well-defined orientations with a wider range of explanation options;

  • Investigate and examine various XAI-enabled applications to identify the available XAI techniques and domain insights through case studies;

  • Discuss the limitations and gaps in the design of XAI methods for the future directions of research and development.

In order to comprehensively analyze XAI approaches, limitations, and future directions from application perspectives, our survey is structured around two main themes, as depicted in Fig. 1. The first theme focuses on general approaches and limitations in XAI, while the second theme aims to analyze the available XAI approaches and domain-specific insights.

Under each domain, we explore four main sub-themes: problem definition, available XAI approaches, case studies, and domain insights. Before delving into each application domain, it is important to review the general taxonomies of XAI approaches. This provides a foundation for understanding and categorizing the various XAI techniques. In each domain, we discuss the available and suitable XAI approaches that align with the proposed general taxonomies of XAI approaches. Additionally, we examine the domain-specific limitations and considerations, taking into account the unique challenges and requirements of each application area. We also explore cross-disciplinary techniques that contribute to XAI innovations. The findings from these discussions are summarized as limitations and future directions, providing valuable insights into current research trends and guiding future studies in the field of XAI.

Fig. 1 The proposed organization to discuss the approaches, limitations and future directions in XAI

2 Taxonomies of XAI Approaches

2.1 Review Scope and Execution

This work is based on a defined scope of review, which refers to the specific boundaries and focus of the research being conducted. In the context of an XAI survey, the scope typically includes the following aspects:

  • XAI approaches: The review will focus on examining and analyzing different XAI approaches and methods that have been proposed in the literature. These include visualization techniques, symbolic explanations, ante-hoc explanations, post-hoc explanations, local explanations, global explanations and any other relevant techniques.

  • Application domains: The review may consider various application domains where XAI techniques have been applied, including medical and biomedical, healthcare, finance, law, cyber security, education and training, and civil engineering. The scope involves exploring the usage of XAI techniques in these domains and analyzing their effectiveness and limitations across multiple domains.

  • Research papers: The review will involve studying and synthesizing research papers that are relevant to the chosen scope. These papers may include original research articles, survey papers and scholarly publications that contribute to the understanding of XAI approaches and their application in the selected domains through case studies.

  • Limitations and challenges: The scope also encompasses examining the limitations and challenges of existing XAI methods and approaches. This could involve identifying common issues, gaps in the literature, and areas that require further research or improvement.

With the scope of review established, the selected databases and search engines include Scopus, Web of Science, Google Scholar, and arXiv, covering publications between 2013 and 2023. The search terms based on the scopes are:

  • XAI keywords: explainable, XAI, interpretable.

  • Review keywords: survey, review, overview, literature, bibliometric, challenge, prospect, trend, insight, opportunity, future direction.

  • Domain keywords: medical, biomedical, healthcare, wellness, civil, urban, transportation, cyber security, information security, education, training, learning and teaching, coaching, finance, economics, law, legal system.

With the selected search terms, two rounds of search strings were designed to effectively retrieve relevant information and narrow down the search results.

The first round, focusing on general research papers, consisted of the following search string: (explainable OR XAI OR interpretable) AND (survey OR review OR overview OR literature OR bibliometric OR challenge OR prospect OR trend OR opportunity OR "future direction").

The second round, aimed at selecting specific application domains, utilized the following search string: (explainable OR XAI OR interpretable) AND (medical OR biomedical OR healthcare OR wellness OR civil OR urban OR transportation OR “cyber security” OR “information security” OR education OR training OR “learning and teaching” OR coaching OR finance OR economics OR law OR “legal system”).

Publications that did not clearly align with the scopes based on their title or abstract were excluded from this review. While not all literature explicitly stated this information, the extracted data was organized and served as the foundation for our analysis.

2.2 XAI Approaches

The taxonomies in the existing survey papers generally categorised XAI approaches based on scope (local or global) [21], stage (ante-hoc or post-hoc) [17] and output format (numerical, visual, textual or mixed) [22]. The main difference between the existing studies and our survey is that this paper focuses on the human perspective, involving source, representation, and logical reasoning. We summarise the taxonomies categorised in this survey in Fig. 2:

Fig. 2 Taxonomies of XAI approaches in this survey

Source-oriented (SO): the sources that support building explanations can reflect either subjective (S) or objective (O) cognition, depending on whether the explanations are based on facts or on human experience. For example, in the medical field, if the explanation of a diagnosis is based on the patient’s clinical symptoms and explains the cause and pathology in detail during the AI learning process, this reflects objective cognition. In contrast, explanations based on subjective cognition consider patients’ current physical conditions and doctors’ medical knowledge.

Representation-oriented (RO): the core representations among XAI approaches can generally be classified into visualisation-based (V), symbolic-based (S) or hybrid (H) methods. Visualisation-based methods are the most common form of representation, including input visualisation and model visualisation. Input visualisation methods provide an accessible way to view and understand how input data affect model outputs, while model visualisation methods provide analysis based on the layers or features inside the model.

Besides visualisation-based methods, other formats of explanations, including numerical, graphical, rule-based, and textual explanations, are covered by symbolic-based methods. Symbolic-based methods tend to describe the process of deep learning models by extracting insightful information, such as meaning and context, and representing it in different formats. Symbolic-based explanations can be provided directly from factual features, including numerical, graphical and textual explanations. For instance, a numerical method [36] performs explanation by highlighting the important regions in the image, which refers to objective cognition. Some researchers also consider using subjective sources; for example, in [85], the authors presented the explanation by combining time series, histopathological images, knowledge databases as well as patient histories.

In terms of representation orientation, visualisation methods emphasise the visualisation of training data rules and the visualisation inside the model, which are the most popular XAI approaches used in medical image analysis. Some typical examples include attribution-based and perturbation-based methods for model-agnostic explanations as well as CAM-based and concept attribution for model-specific explanations. Local interpretable model-agnostic explanations (LIME) [86] is utilised to generate explanations for the classification of medical image patches. Zhu et al. [87] used rule-based segmentation and perturbation-based analysis to generate the explanation for visualising the importance of each feature in the image. Concept attribution [37] is introduced by quantifying the contribution of features of interest to the CNN network’s decision-making. Symbolic methods focus on symbolic information representations that simulate the doctor’s decision-making process with natural language, along with the generated decision results, such as primary diagnosis reports. For example, Kim et al. [66] introduced concept activation vectors (CAVs), which provide textual interpretation of a neural network’s internal state with user-friendly concepts. Lee et al. [73] provided explainable computer-aided diagnoses by combining a visual pointing map and diagnostic sentences based on a predefined knowledge base.
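To ground the visualisation-based category, the snippet below is a minimal, hypothetical sketch of applying LIME to an image classifier of the kind used for medical image patches; the `predict_fn` wrapper and the parameter values are illustrative assumptions rather than the exact setup of [86].

```python
# Minimal, hypothetical sketch: LIME explanation for an image classifier
# (assumes the `lime` and `scikit-image` packages; `predict_fn` wraps any
# black-box model taking a batch of HxWx3 arrays and returning class probabilities).
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def explain_image(image, predict_fn):
    """Return an overlay marking superpixels that support the top predicted class."""
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        np.asarray(image, dtype=np.double),
        predict_fn,          # black-box prediction function
        top_labels=1,
        hide_color=0,        # value used to "remove" superpixels in perturbations
        num_samples=1000,    # number of perturbed samples around the instance
    )
    label = explanation.top_labels[0]
    img, mask = explanation.get_image_and_mask(
        label, positive_only=True, num_features=5, hide_rest=False
    )
    return mark_boundaries(img / 255.0, mask)  # assumes a 0-255 RGB input image
```

The returned overlay is the kind of region-highlighting explanation for image patches described above.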

In terms of logical orientation, end-to-end explanations, such as the above-mentioned LIME and perturbation-based methods, are utilised to explain the relationship between input medical images and predicted results. For example, a linear regression model is embedded into LIME [86] to identify relevant regions by plotting heat maps with varying color scales.

3.7 Civil Engineering

3.7.1 Problem Definition

AI systems used in civil engineering research have a significant impact on decision-making processes in road transport and power systems. In particular, autonomous driving in road transport and power system analysis are common areas that use deep learning techniques, for tasks such as navigation and path planning, scene recognition, lane and obstacle detection, as well as planning, monitoring, and controlling the power system [150, 184].

In the field of autonomous driving, deep learning techniques are normally utilised to recognise scenes from digital images [184, 185], while in the field of power system analysis, deep learning techniques are used to extract features from the underlying data for power system management, such as power grid synthesis, state estimation, and photovoltaic (PV) power prediction [150, 186]. These techniques automatically extract abstract features of images or deep non-linear features of the underlying data through end-to-end predictive processing, which is not sufficient to provide the evidence needed to trust and accept the results of autonomous driving and power system management. For example, traffic light and sign recognition can be used for driving planning, in which the traffic lights at crosswalks and intersections are essential for following traffic rules and preventing traffic accidents. Deep learning methods have achieved prominence in traffic sign and light recognition, but they struggle to explain the correlation between inputs and outputs and lack explanations to support reasoning in driving planning studies [187]. In power system management, deep learning methods may mislead the output explanations of power stability and provide unreliable recommendations, so explanations can increase user trust [150].

3.7.2 XAI Based Proposals

XAI can improve the management of autonomous driving and power systems, providing effective interaction to promote smart civil engineering. Interpretation research in autonomous driving and power systems is distinctive because the models are influenced not only by data but also by expert knowledge and ethical principles.

In terms of source orientation, objective interpretability obtains visible or measurable results from 2D and 3D images or underlying datasets, while subjective interpretability requires consideration of the knowledge of automotive or electrical experts and the ethical standards of their fields. Current XAI proposals include both objective and subjective cognitive aspects. For example, CAM, as an objective cognition method, is used to highlight important regions in 2D or 3D images, while time series, 2D images, 3D images, Lidar images, knowledge databases and ethical criteria are utilised as subjective sources to explain the model [147, 185, 187].

In terms of representation orientation, visual interpretation provides the highest-level semantics for understanding which parts of an image impact the model, emphasising the visual structure of the data and the model, and it is the primary XAI method used in autonomous driving. These XAI methods can be divided into gradient-based and backpropagation-based approaches. Gradient-based interpretation methods include CAM and its enhanced variants such as Grad-CAM, Guided Grad-CAM, Grad-CAM++ and Smooth Grad-CAM++. CAM can highlight the discriminative regions of a scene image used for scene detection [147]. Backpropagation-based methods include guided backpropagation, layer-wise relevance propagation, VisualBackProp and DeepLIFT. VisualBackProp shows which sets of input pixels contribute to steering self-driving cars [144]. Symbolic interpretation uses understandable language to provide evidence for result recommendations in autonomous driving and power system management. In autonomous driving, the proposed AI methods make decisions according to traffic rules, for example, “the traffic light ahead turned red,” thus “the car stopped” [185]. In power system management, data gathered from occupant actions for resources such as room lighting is used to forecast patterns of energy resource usage [188]. Hybrid interpretation combines visual and symbolic interpretation to support steering decisions in autonomous driving. For example, Berkeley DeepDrive-X (BDD-X) is introduced in autonomous driving and includes descriptions of driving pictures and annotations for textual interpretation [49].
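As a concrete illustration of the gradient-based family above, the following is a minimal Grad-CAM sketch on a generic torchvision CNN; the ResNet-18 backbone and the choice of `layer4` as the target layer are assumptions for illustration, not the configuration used in the cited autonomous-driving work.

```python
# Minimal Grad-CAM sketch on a generic torchvision CNN (assumption: ResNet-18,
# with `layer4` as the target convolutional block).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
target_layer = model.layer4

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

def grad_cam(x, class_idx=None):
    """Return a normalised (H, W) heat map for the predicted or given class."""
    scores = model(x)                         # x: (1, 3, H, W) normalised image tensor
    if class_idx is None:
        class_idx = scores.argmax(dim=1).item()
    model.zero_grad()
    scores[0, class_idx].backward()
    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)   # per-channel weights
    cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()
```

The resulting heat map can be overlaid on the input frame to highlight the scene regions that drive the prediction, in the spirit of the CAM-based methods surveyed here.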

In terms of logical orientation, end-to-end explanations are used to explain the relationship between input images, including obstacle and scene images, and the prediction. For example, LIME is utilised to explain the relationship between an input radar image and the prediction results [189]. Middle-end explanations reveal the reasons behind an autoencoder-based assessment model and how they can help drivers reach a better understanding of and trust in the model and its results. For example, a rule-based local surrogate interpretable method, namely MuRLoS, is proposed, which focuses on the interaction between features [149]. Correlation explanation is used in the risk management of self-driving and power systems. For example, SHAP is used to assess and explain collision risk using real-world driving data for self-driving [190].
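To illustrate the correlation-style explanation mentioned above, here is a brief, hypothetical SHAP sketch on a synthetic tabular collision-risk task; the feature names, data, and gradient-boosting model are illustrative assumptions rather than the setup of [190].

```python
# Hypothetical sketch: SHAP attributions for a synthetic collision-risk classifier
# (assumes `shap`, `pandas`, and `scikit-learn`; features and data are illustrative).
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["speed", "headway_time", "brake_pressure", "steering_angle", "rain_intensity"]
X = pd.DataFrame(rng.normal(size=(500, len(features))), columns=features)
# Synthetic label: higher speed and shorter headway raise collision risk.
y = (X["speed"] - X["headway_time"] + 0.3 * rng.normal(size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)     # exact SHAP values for tree ensembles
shap_values = explainer.shap_values(X)

shap.summary_plot(shap_values, X)         # global view of which factors drive risk
```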

3.7.3 Case Studies

Decisive vehicle actions: Decisive vehicle actions in autonomous driving are based on multiple tasks, such as scene recognition, obstacle detection, lane recognition, and path planning. Attention mechanisms, heat maps, diagnostic models and textual descriptions can be used to recognise obstacles, scenes and lanes and to steer the operation of the car [147, 185, 187]. As mentioned before, CAM is used to highlight the main area for recognition [63]. VisualBackProp, unlike CAM-based methods, emphasises pixel-level highlighting to filter the features of scene images [144]. Grad-CAM is combined with existing fine-grained visualisations to provide a high-resolution class-discriminative visualisation [36]. Visual attention heat maps are used to explain the vehicle controller’s behaviour by segmenting and filtering simpler and more accurate maps without degrading control accuracy [145]. A neural motion planner uses 3D detection instances with descriptive information for safe driving [146]. An interpretable tree-based representation, as a hybrid representation, combines rules, actions, and observations to generate multiple explanations for self-driving [147]. An architecture for joint scene prediction is used to explain object-induced actions [149]. An auto-discern system utilises observations of the surroundings and commonsense reasoning to provide answers for driving decisions [148].

Power system management: Power system management normally consists of stability assessment, emergency control, power quality disturbance analysis, and energy forecasting. A CNN classifier, combined with non-intrusive load monitoring (NILM), is utilised to estimate the activation state and provide feedback to the consumer-user [150]. The SHAP method is first used in emergency control for reinforcement learning for grid control (RLGC) under three different output analyses [151]. Deep-SHAP is proposed for the under-voltage load shedding of power systems, adding feature classification of the inputs and probabilistic analysis of the outputs to increase clarity [152].
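As a rough illustration of how Deep-SHAP-style attributions can be obtained for a neural grid model, the sketch below applies `shap.DeepExplainer` to a small synthetic risk-scoring network; the architecture, features, and data are assumptions and do not reproduce the model in [152].

```python
# Hypothetical sketch: Deep-SHAP attributions for a small risk-scoring network
# (assumes `shap` and PyTorch; the architecture, features and data are illustrative).
import numpy as np
import shap
import torch
import torch.nn as nn

torch.manual_seed(0)
n_features = 8                                   # e.g. bus voltages and load measurements
model = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1))

X = torch.randn(256, n_features)                 # synthetic grid measurements
background = X[:100]                             # reference distribution for the explainer

explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(X[100:110])  # per-feature attributions for 10 samples

vals = shap_values[0] if isinstance(shap_values, list) else shap_values
print(np.abs(vals).mean(axis=0))                 # mean |attribution| per input feature
```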

3.7.4 Domain-Specific Insights

In terms of transportation systems, operators such as drivers and passengers are the primary end-users in scenarios involving decisive vehicle actions, because they may want to comprehend the reasoning behind the decisions made by the autonomous system. This is especially important in high-stakes domains where human lives are at risk. XAI can provide explanations for AI decisions to make the system more transparent and foster trust. Real-time explanation poses a significant challenge for XAI in decisive vehicle actions, because decisions need to be made within fractions of a second. Rapidly changing environments, such as weather conditions, pedestrian movement and the actions of other vehicles, mean that XAI should ideally support quick and accurate decisions. Moreover, every driving situation can be unique, so XAI needs to suit diverse situations and adapt its explanations based on context-aware interpretability. As previously mentioned, XAI demands more computational resources because real-time explanations require timely responses. Moreover, decisive vehicle actions rely on high-dimensional sensor data, such as inputs from LiDAR and stereo cameras, which makes methods like LIME and SHAP, which approximate local decision boundaries, computationally expensive, especially for high-dimensional inputs. The requirement is therefore for XAI that can generate real-time, informative explanations without overburdening the computational resources of the system.

In terms of infrastructure system management, such as power or water system management, the general public, including governments and residents, are the key end-users. Government bodies want to oversee the safe and fair use of AI in power system management, while residents may be curious about the mechanics of the AI used to manage power systems in the city. XAI can be used to evaluate AI systems for safety, fairness, transparency, and adherence to regulatory requirements. Interpretation complexity is a primary challenge for XAI in infrastructure system management due to the multidimensional nature of the data, which includes factors from power generators, transmission lines, and power consumers. Moreover, unlike the case of autonomous driving, power system operations demand more technical expertise and need to adhere to various regulatory requirements. Consequently, XAI needs not only to provide coherent and insightful interpretations of the system’s operations but also to demonstrate that these operations comply with all relevant regulations. The entire process in infrastructure system management spans from generation and distribution to monitoring consumer usage patterns. The complexity is further amplified by the demands of load balancing and handling power outages, which influence public life and city operations. Moreover, the system also needs to satisfy various regulations and standards. To evidence such compliance, XAI may need to generate more complex or detailed explanations, thus increasing the computational cost.

3.8 Cross-Disciplinary Techniques for XAI Innovations

Cross-disciplinary XAI innovation refers to the advancements and developments in explainable AI (XAI) that span multiple domains and disciplines. It involves the integration and adaptation of XAI techniques and methodologies to address complex problems and challenges that arise in diverse fields.

One aspect of cross-disciplinary XAI innovation is the exploration and utilization of common XAI techniques across different domains. These techniques, such as attention-based models, model-agnostic methods, and rule-based methods, can be applied to various fields to provide transparent and interpretable explanations for AI models. Below are some examples of common XAI techniques:

  1. Regression-based partitioned methods: can be applied to any black-box model. For example, LIME approximates the decision boundaries of the model locally and generates explanations by highlighting the features that contribute most to the prediction for a specific instance. LIME can be used in domains such as healthcare, cyber security, finance, or education to provide instance-level interpretability and explainability. SHAP is another common technique based on cooperative game theory, which can be applied to different domains to explain the importance of features in the decision-making process. For example, in medical diagnostics, SHAP can help understand which medical parameters or biomarkers have the most impact on a particular diagnosis.

  2. Feature importance: Feature importance techniques assess the relevance and contribution of each feature to the model’s predictions. Methods like permutation importance, Gini importance, or gain-based importance are commonly used (see the sketch after this list). Feature importance can be useful in various domains to identify the factors that drive specific outcomes or decisions. For instance, in finance, feature importance can help understand which financial indicators or market factors play a crucial role in investment decisions.

  3. Partial dependence plots: Partial dependence plots visualize the relationship between a feature and the model’s output while holding other features constant. These plots show how changing the value of a specific feature affects the model’s predictions (see the sketch after this list). Partial dependence plots can be employed in domains such as healthcare, where they can provide insights into the impact of certain medical treatments or interventions on patient outcomes.

  4. Rule-based models: Rule-based models provide transparent and interpretable decision-making processes by expressing decision rules in the form of “if-then” statements. These models can be used in various domains to generate explanations that are easily understandable by humans. In legal applications, rule-based models can help explain legal reasoning by mapping legal principles and regulations to decision rules.
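As referenced in items 2 and 3 above, the following is a minimal scikit-learn sketch of permutation importance and a partial dependence plot on a synthetic tabular task; the data and random-forest model are illustrative assumptions.

```python
# Minimal sketch of permutation importance and partial dependence with scikit-learn;
# the synthetic regression task stands in for any tabular domain problem.
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay, permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance: how much the score drops when each feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")

# Partial dependence: average predicted response as a single feature varies.
PartialDependenceDisplay.from_estimator(model, X_test, features=[0, 1])
plt.show()
```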

These are just a few examples of common XAI techniques that can be applied across different domains. The choice of technique depends on the specific requirements and characteristics of each domain. We summarise some typical suitable XAI approaches for each domain in Table 5. By leveraging these techniques, domain experts and practitioners can gain insights into the inner workings of AI models and make informed decisions based on understandable and interpretable explanations.

Another aspect of cross-disciplinary XAI innovation involves the development of domain-specific XAI approaches. In Table 5, we summarize some typical suitable XAI approaches for different domains. These approaches can be tailored to the unique characteristics and requirements of specific domains, taking into account the specific challenges and complexities of each field. Domain-specific XAI approaches consider various factors, including domain knowledge, regulations, and ethical considerations, to create an XAI framework that is specifically designed for a particular domain. By incorporating domain expertise and contextual information, these approaches provide explanations that are not only interpretable but also relevant and meaningful within their respective domains.

By tailoring XAI approaches to specific domains, practitioners can gain deeper insights into the behavior of AI models within the context of their field. This not only enhances transparency and trust in AI systems but also enables domain-specific considerations to be incorporated into the decision-making process, ensuring the explanations are relevant and aligned with the requirements and constraints of each domain.

Table 5 XAI suitability analysis for application domains

Furthermore, cross-disciplinary XAI innovation emphasizes the importance of collaboration and the integration of expertise from different fields. This approach recognizes that the challenges and complexities of XAI extend beyond individual domains and require a multidisciplinary perspective. Collaboration and integration of expertise enable a holistic approach to XAI, where insights from different disciplines can inform the development of innovative and effective solutions. For example, in the field of healthcare, collaboration between medical practitioners, data scientists, and AI researchers can lead to the development of XAI techniques that not only provide interpretable explanations but also align with medical guidelines and regulations. This integration of expertise ensures that the explanations generated by XAI systems are not only technically sound but also relevant and meaningful in the specific healthcare context.

Similarly, in the domain of cybersecurity, collaboration between cybersecurity experts, AI specialists, and legal professionals can lead to the development of XAI techniques that address the unique challenges of cybersecurity threats. By combining knowledge from these different fields, XAI systems can provide interpretable explanations that enhance the understanding of AI-based security measures, assist in identifying vulnerabilities, and facilitate decision-making processes for cybersecurity professionals.

The collaboration and integration of expertise from different fields also foster a cross-pollination of ideas and perspectives, driving innovation and the development of novel XAI techniques. By leveraging the diverse knowledge and experiences of experts from various domains, XAI can evolve and adapt to meet the evolving needs and challenges of different industries and societal contexts.

4 Discussion

With growing concerns about explainability and increasing attention to XAI, regulations such as the GDPR set out transparency rules for data processing. As most modern AI systems are data-driven, these requirements are applicable to virtually all application domains. Not only is explainability itself necessary, but the way of explaining is also required.

In this section, we will summarize the limitations of existing XAI approaches based on the above review in each application domain, and identify future research directions.

4.1 Limitations

Adaptive integration and explanation: many existing approaches provide explanations in a generic manner, without considering the diverse backgrounds (culture, context, etc.) and knowledge levels of users. This one-size-fits-all approach can lead to challenges in effective comprehension for both novice and expert users. Novice users may struggle to understand complex technical explanations, while expert users may find oversimplified explanations lacking in depth. These limitations hinder the ability of XAI techniques to cater to users with different levels of expertise and may impact the overall trust and usability of the system. Furthermore, the evaluation and assessment of XAI techniques often prioritize objective metrics, such as fidelity or faithfulness, which measure how well the explanations align with the model’s internal workings. While these metrics are important for evaluating the accuracy of the explanations, they may not capture the subjective aspects of user understanding and interpretation. The perceived quality of explanations can vary among users with different expertise levels, as well as under different situations or conditions.

Interactive explanation: in the current landscape of XAI research, there is recognition that a single explanation may not be sufficient to address all user concerns and questions in decision-making scenarios. As a result, the focus has shifted towards developing interactive explanations that allow for a dynamic and iterative process. However, there are challenges that need to be addressed in order to effectively implement interactive explanation systems. One of the key challenges is the ability to handle a wide range of user queries and adapt the explanations accordingly. Users may have diverse information needs and may require explanations that go beyond superficial or generic responses. In particular, addressing queries that involve deep domain knowledge or intricate reasoning processes can be complex and requires sophisticated techniques. Another challenge is striking a balance between providing timely responses to user queries and maintaining computational efficiency. Interactive explanation systems need to respond quickly to user interactions to facilitate a smooth and engaging user experience. However, generating accurate and informative explanations within a short response time can be demanding, and trade-offs may need to be made depending on the specific domain and computational resources available. Moreover, the design and implementation of interactive explanation systems should also consider the context and domain-specific requirements. Different domains may have unique challenges and constraints that need to be taken into account when developing interactive explanations. It is important to ensure that the interactive explanation systems are tailored to the specific domain and can effectively address the needs of users in that context.

Connection and consistency in hybrid explanation: in the context of hybrid explanations in XAI, it is crucial to ensure connection and consistency among different sources of explanations. Hybrid approaches aim to leverage multiple techniques to provide users in various domains with different application purposes, achieving robustness and interpretability. However, it is necessary to address potential conflicts and ensure coordinated integration of different components within these hybrid systems. Currently, many works focus on combining various explanation techniques to complement each other and enhance overall system performance. While this integration is valuable, it is important to acknowledge that different techniques may have inherent differences in their assumptions, methodologies, and outputs. These differences can result in conflicts or inconsistencies when combined within a hybrid explanation system. Therefore, careful attention should be given to the design of complex hybrid explanation systems. The structure and architecture need to be thoughtfully planned to ensure seamless connections between components. This involves identifying potential conflicts early on and developing strategies to resolve them. Additionally, efforts should be made to establish a unified framework that allows for effective coordination and integration of the different techniques used in the hybrid system. Furthermore, the evaluation and validation of hybrid explanation systems should include assessing the consistency of explanations provided by different sources. This evaluation process helps identify any discrepancies or inconsistencies and guides the refinement of the system to ensure a coherent and unified user experience.

Balancing model interpretability with predictive accuracy: currently, researchers are developing hybrid approaches that aim to strike a better balance between interpretability and accuracy, such as using post-hoc interpretability techniques with complex models or designing new model architectures that inherently provide both interpretability and high accuracy. However, these approaches also come with their own limitations. Post-hoc interpretability techniques generate explanations after the model has made its predictions, which means they do not directly influence the model’s decision-making process. As a result, the explanations may not capture the full complexity and nuances of the model’s internal workings. Furthermore, post-hoc techniques can be computationally expensive and may not scale well to large datasets or complex models with high-dimensional inputs. New model architectures such as rule-based models or attention mechanisms in neural networks may struggle to capture complex interactions and may require a significant amount of manual rule engineering. It is crucial to recognize that there is no universal solution to the interpretability-accuracy trade-off. The choice of approach depends on the specific requirements of the application, available resources, and acceptable trade-offs in the given context. Researchers and practitioners must carefully consider the limitations and benefits of different techniques to strike an appropriate balance based on their specific use cases.

Long-term usability and maintainability: the current XAI methods face several limitations when deployed in real-world scenarios. One significant limitation is the need for continuous explanation updates. XAI systems generate explanations based on training data, and as the underlying AI models or data evolve, the explanations may become outdated or less accurate. To ensure relevance and usefulness, XAI systems should be designed to incorporate mechanisms for updating explanations to reflect the latest model updates or data changes. Another limitation is the assumption of stationary data distributions. XAI methods are typically trained on historical data, assuming that the future data will follow a similar distribution. However, if the data distribution changes over time, the performance of the XAI system may deteriorate. Adapting XAI methods to handle shifting data distributions is essential for maintaining their effectiveness and ensuring reliable explanations in dynamic environments. Scalability is another crucial consideration, particularly for large-scale AI systems. XAI techniques that work well on small-scale or controlled datasets may face challenges when applied to large-scale AI systems with complex models and massive amounts of data. Efficient algorithms and sufficient computational resources are necessary to handle the increased computational demands of explaining large-scale AI systems without sacrificing performance or usability.

4.2 Future Directions

To address the first limitation, building context-aware XAI is important: we need to explore how to generate explanations by considering mission contexts (surrounding environment, situations, time-series datasets, etc.), mapping user roles (end-user, domain expert, business manager, AI developer, etc.) and targeted goals (refining the model, debugging system errors, detecting bias, understanding the AI learning process, etc.), regardless of the type of AI system. So far, most of these studies are still conceptual with limited scope; more general context-driven systems and practical implementations will be an important direction for future research.

Secondly, interactive explanations (e.g., conversational system interfaces, games, and the use of audio, visuals, and video) should be explored further. This is a promising approach to building truly human-centred explanations by identifying users’ requirements and providing better human-AI collaboration. Incorporating such theories and frameworks allows an iterative process involving humans, which is a crucial aspect of building successful XAI systems.

Finally, hybrid explanations should be applied with attention to fusing heterogeneous knowledge from different sources and managing time-sensitive data, inconsistency, uncertainty, etc. Hybrid explanation has been a topic of growing interest in recent years. This will also involve a wide range of criteria and strategies that target a clear structure and consensus on what constitutes successful and trustworthy explanations.

5 Conclusion

This paper addresses a wide range of explainable AI topics. XAI is a rapidly growing field of research, as it fills a gap in current AI approaches, allowing people to better understand AI models and therefore trust their outputs. By summarising the current literature, we have proposed a new taxonomy for XAI from the human perspective. The taxonomy considers source-oriented, representation-oriented, and logic-oriented perspectives.

We have elaborated on the applications of XAI in multiple areas, including medical, healthcare, cybersecurity, finance and law, education and training, and civil engineering. We provide a comprehensive review of different XAI approaches and identify the key techniques through case studies. Finally, we discuss the limitations of existing XAI methods and present several corresponding areas for further research: (1) context-aware XAI, (2) interactive explanations, and (3) hybrid explanations.

Overall, this paper provides a clear survey of the current XAI research and application status from the human perspective. We hope this article will provide a valuable reference for XAI-related researchers and practitioners. We believe XAI will build a bridge of trust between humans and AI.