1 Introduction

Graph Neural Networks (GNNs) are a widely used class of machine learning models. Since graphs occur naturally in several domains such as chemistry, biology, and medicine, GNNs have experienced widespread adoption. Following the trend toward more interpretable machine learning models, there have been numerous recent proposals to provide explanations for GNNs. Most existing approaches provide post-hoc explanations, starting from an already trained GNN to identify the edges and node attributes that explain the model’s prediction. However, as highlighted in Faber et al. (2021), there might be a discrepancy between the ground-truth explanations and those attributed to the trained GNN. Indeed, post-hoc explanations are often unable to faithfully represent the mechanisms of the original model (Rudin, 2018). Unfortunately, the very definition of what constitutes a faithful explanation is still open to debate, and there exist several competing positions on the matter. Recent work has also shown that post-hoc attribution methods are often no better than random baselines on the standard evaluation metrics for explanation accuracy and faithfulness (Agarwal et al., 2022b). Far fewer approaches have considered the problem of GNN explainability from an intrinsic perspective. In contrast to post-hoc methods, approaches with built-in interpretability provide explanations during training by introducing new mechanisms, e.g., prototypes (Zhang et al., 2022), stochastic attention (Miao et al., 2022), or graph kernels (Feng et al., 2022a). Nonetheless, these new mechanisms for computing graph representations differ from standard GNN computations. The reasoning process of such interpretable networks therefore differs from that of the original GNN architectures, making them not faithful by design. Our intent, instead, is to generate explanations for standard GNNs while keeping the computations as faithful as possible to the original network.

A recently proposed alternative to post-hoc methods is the learning to explain (L2X) paradigm (Chen et al., 2018). The core difference to post-hoc methods is that the model is trained, in the forward pass, to discretely select a small subset of the input features and to learn the parameters of a downstream model that uses only the selected features to make a prediction. The selected features are, therefore, faithful by design, as they are the only ones used by the downstream model. Since the subset of features is sampled discretely, L2X requires a method for computing gradients of an expectation over a discrete probability distribution. Chen et al. (2018) proposed a gradient estimator based on a relaxation of the discrete samples and tailored to the k-subset distribution. However, since the original work only considers the case of selecting exactly k features, directly applying prior methods to graph learning tasks is not possible and requires significant changes. Because prior gradient estimators do not work with arbitrary optimization problems but are restricted to the k-subset distribution, using the L2X paradigm for graphs is highly non-trivial.

With this work, we bring the L2X paradigm to graph representation learning. The key ingredient is a recently proposed method for computing gradients of an expectation over a complex exponential family distribution (Niepert et al., 2021). The method facilitates approximate gradient backpropagation for models combining continuously differentiable GNNs with a black-box solver of combinatorial problems defined on graphs. Crucially, this allows us to learn to sample subgraphs with beneficial properties such as being connected and sparse. Contrary to prior work, this also creates a dependency between the random variables representing the presence of edges. The proposed framework L2xGnn, therefore, learns to select explanatory subgraph motifs and uses these and only these motifs for its message-passing operations. To the best of our knowledge, this is the first method for learning to explain standard GNNs. The proposed framework is extensible, as it can work with any optimization algorithm for graphs that imposes properties on the sampled subgraphs.

We compare two different sampling strategies for obtaining sparse subgraph explanations, resulting from two optimization problems on graphs: (1) the maximum-weight k-edge subgraph and (2) the maximum-weight k-edge connected subgraph problem. In line with Faber et al. (2021), we focus on explaining edges, since they provide more fine-grained information than nodes. We show empirically that L2xGnn, when combined with a base GNN, does not lose accuracy on several benchmark datasets. Moreover, we evaluate the explanations quantitatively and qualitatively. We also analyze the ability of L2xGnn to help detect shortcut learning, which can be used for debugging the GNN. Given the characteristics of the proposed method, our work improves model interpretability and increases the clarity of known black-box models, such as GNNs, while maintaining competitive predictive capabilities.

2 Background

Let \(\mathcal {G}(V, E)\) be a graph with \(n=|V|\) nodes. Let \(\textbf{X} \in \mathbb {R}^{n \times d}\) be the feature matrix that associates each node of the graph with a d-dimensional feature vector, and let \(\textbf{A} \in \mathbb {R}^{n \times n}\) be the adjacency matrix. GNNs perform three computations based on the message-passing paradigm (Hamilton et al., 2017), which is defined as

$$\begin{aligned} \textbf{h}_i^{\ell } = \gamma \left( \textbf{h}_i^{\ell -1}, \square _{j \in \mathcal {N}(v_i)} \phi \left( \textbf{h}_i^{\ell -1}, \textbf{h}_j^{\ell -1}, r_{ij} \right) \right), \end{aligned}$$
(1)

where \(\gamma \), \(\square \), and \(\phi \) represent the update, aggregation, and message functions, respectively.

Propagation step The message-passing network computes a message \(m_{ij}^{\ell } = \phi (\textbf{h}_i^{\ell -1}, \textbf{h}_j^{\ell -1}, r_{ij})\) between every pair of nodes \((v_i, v_j)\). The function takes as input \(v_i\)’s and \(v_j\)’s representations \(\textbf{h}_i^{\ell -1}\) and \(\textbf{h}_j^{\ell -1}\) at the previous layer \(\ell - 1\), and the relation \(r_{ij}\) between the two nodes.

Aggregation step For each node in the graph, the network performs an aggregation computation over the messages from \(v_i\)’s neighborhood \(\mathcal {N}(v_i)\) to calculate an aggregated message \(M_i^\ell = \square (\{m_{ij}^\ell \mid v_j \in \mathcal {N}(v_i)\})\). The definition of the aggregation function differs between methods (Hamilton et al., 2017; Veličković et al., 2018; Xu et al., 2018; Duval & Malliaros, 2021).

Update step Finally, the model non-linearly transforms the aggregated message \(M_i^\ell \) and \(v_i\)’s representation from the previous layer \(\textbf{h}_i^{\ell -1}\) to obtain \(v_i\)’s representation at layer \(\ell \) as \(\textbf{h}_i^{\ell } = \gamma (M_i^\ell,\textbf{h}_i^{\ell -1})\). The final embedding for node \(v_i\) after L layers is \(\textbf{z}_i = \textbf{h}_i^L\) and is used for node classification tasks. For graph classification, an additional readout function aggregates the node representations to obtain a graph representation \(\textbf{h}_G\). This function can be any permutation-invariant function or a graph-level pooling function (Ying et al., 2018; Zhang et al., 2018; Lee et al., 2019). For Graph Isomorphism Networks (GINs) (Xu et al., 2018), for instance, the message-passing operation for node \(v_i\) is

$$\begin{aligned} \textbf{h}_i^{\ell } = \gamma ^{\ell } \left( \left( 1 + \epsilon ^{\ell } \right) \cdot \textbf{h}_i^{\ell -1} + \sum _{j \in \mathcal {N}(v_i)} \textbf{h}_j^{\ell -1} \right), \end{aligned}$$
(2)

where \(\gamma \) represents a multi-layer perceptron (MLP), and \(\epsilon \) denotes a learnable parameter. We will write \(\textbf{H}_\ell = \textsc {Gnn}_\ell (\varvec{A}, \textbf{H}_{\ell -1})\) as a shorthand for the application of the \(\ell ^{\text {th}}\) layer of the GNN under consideration.
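To make the notation concrete, the following is a minimal sketch of one GIN layer (Eq. 2) operating on a dense adjacency matrix, i.e., one application of \(\textbf{H}_\ell = \textsc {Gnn}_\ell (\varvec{A}, \textbf{H}_{\ell -1})\). The class name, layer sizes, and the toy graph are illustrative and not taken from the paper’s implementation.

```python
import torch
import torch.nn as nn


class GINLayer(nn.Module):
    """One GIN layer (Eq. 2) on a dense adjacency matrix."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))      # learnable epsilon
        self.mlp = nn.Sequential(                    # gamma: a small MLP
            nn.Linear(in_dim, out_dim), nn.ReLU(), nn.Linear(out_dim, out_dim)
        )

    def forward(self, A: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
        # A: (n, n) binary adjacency matrix; H: (n, d) node representations.
        # A @ H sums the neighbor representations: sum_{j in N(i)} h_j.
        return self.mlp((1 + self.eps) * H + A @ H)


# Toy usage: H_l = GNN_l(A, H_{l-1}) on a 4-node path graph.
A = torch.tensor([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
H = torch.randn(4, 8)
print(GINLayer(8, 16)(A, H).shape)  # torch.Size([4, 16])
```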

3 Related work

There are several methods to explain the behavior of GNNs. Following Yuan et al. (2022), explanatory methods for GNNs can be divided into several categories.

Gradient-based methods (Pope et al., 2019; Baldassarre & Azizpour, 2019; Sanchez-Lengeling et al., 2020). The main idea is to compute the gradients of the target prediction with respect to the corresponding input data. The larger the gradient values, the higher the importance of the input features.

Perturbation-based methods (Ying et al., 2019; Luo et al., 2020; Schlichtkrull et al., 2021; Yuan et al., 2021; Perotti et al., 2023). Here, the objective is to study the models’ output behavior under input perturbations. If the perturbed input yields an output comparable to the original one, we can conclude that the perturbed input information is not important for the prediction. Inspired by causal inference methods, Lin et al. (2021), Lucic et al. (2022), and Tan et al. (2022) attempt to provide explanations based on factual and counterfactual reasoning.

Surrogate methods (Huang et al., 2022; Vu & Thai, 2020; Duval & Malliaros, 2021; Gui et al., 2022). First, these approaches generate a local dataset composed of data points in the neighborhood of the input. The local dataset is assumed to be less complex and, consequently, can be analyzed with a simpler model. Then, a simple and interpretable surrogate model is used to capture local relationships that serve as explanations for the predictions of the original model.

Decomposition methods (Schwarzenberg et al., 2019; Schnake et al., 2021; Feng et al., 2022b). These methods use decomposition rules to decompose the model prediction back to the input space. The prediction is taken as the target score. Then, starting from the output layer, the target score is decomposed at each preceding layer according to predefined decomposition rules. In this way, the initial target score is distributed among the neurons at every layer. Finally, the decomposed terms obtained at the input layer are associated with the input features and used as importance scores for the corresponding nodes and edges.

Model-level methods (Yuan et al., 2020). Different from the instance-level methods above, these methods provide a general and high-level understanding of the models. In the context of GNNs, they aim at studying the input patterns that would lead to a certain target prediction. The generated explanations are general and provide a global understanding of the trained GNNs.

Prototype-based methods Zhang et al. (2022) propose ProtGNN, a new explanatory method based on prototypes to provide built-in explanations, overcoming the limitations of post-hoc techniques. The explanations are obtained following case-based reasoning, where new instances are compared with several learned prototypes.

Concept-based methods Magister et al. (2021) propose CGExplainer, a post-hoc explanatory method for human-in-the-loop concept discovery. This concept representation learning method extracts concept-based explanations that allow the end-user to analyze predictions with a global view.

Among the methods categorized above, a similar approach in intent is presented in Schlichtkrull et al. (2021). The authors propose a post-hoc technique that learns how to remove the unnecessary edges through layer-wise edge masking. There are two main differences compared to our work: (1) the edge masking is learned from an already trained model, while we learn the edges to remove during training; (2) the edges are treated as independent binary random variables. In our case, instead, the optimization algorithm allows us to model the dependencies between edge variables.

Additional works address the explainability problem from different perspectives, such as explanation supervision (Gao et al., 2021), neuron analysis (Xuanyuan et al., 2023), and motif-based generation (Yu & Gao, 2022). For a comprehensive discussion of methods to explain GNNs, we refer the reader to the survey of Yuan et al. (2022). In the following subsections, we provide a more detailed comparison with inherently interpretable methods and graph structure learning approaches.

3.1 Comparison with non-post-hoc methods

While most explanation methods for graphs are post-hoc, ProtGNN (Zhang et al., 2022), KerGNN (Feng et al., 2022a), and GSAT (Miao et al., 2022) are noteworthy exceptions. The first approach generates explanations by comparing input graphs with prototypes learned during training. The second combines graph kernels with the message-passing paradigm to learn hidden graph filters. The third leverages stochastic attention to select task-relevant subgraphs for interpretation. Although they all provide built-in explanations, the new mechanisms they introduce to compute graph representations differ from standard GNN computations, so these approaches are not faithful by design (i.e., they do not reflect the reasoning process of the original backbone architecture). In contrast to these methods, our approach relies solely on standard GNNs, making it suitable for explaining them faithfully. Additionally, in terms of explanatory capability, the learned prototypes are not directly interpretable and need to be matched to the closest training subgraphs to be human-understandable. Graph filters, instead, do not necessarily match existing patterns in the instance-based case. In both cases, the output can only provide a general idea of the important structures used by the model for prediction but fails to reveal the precise instance-level explanation for each input graph.

3.2 Comparison with graph structure learning approaches

Recently, there have been related methods for learning the graph structure used by graph neural networks. Following the taxonomy proposed in Zhu et al. (2021), the structure learning methods most related to L2xGnn fall into the postprocessing category and, more specifically, under the discrete sampling subcategory. All existing methods use variants of the Gumbel-softmax trick, which is limited in modeling complex distributions. Moreover, only when the straight-through version of the Gumbel-softmax trick is used can one obtain truly discrete, rather than merely relaxed, adjacency matrices in the forward pass. In contrast, L2xGnn always samples purely discrete adjacency matrices. It is, to the best of our knowledge, the only method that can model complex dependencies between the edge variables through its ability to integrate a combinatorial optimization algorithm on graphs. Other strategies include sampling edges between each pair of nodes from a Bernoulli distribution (Franceschi et al., 2019) or sampling subgraphs for subgraph aggregation methods in a data-driven manner (Qian et al., 2022). None of these methods, however, addresses the problem of explicitly explaining the behavior of GNNs.

3.3 Limitations of prior work

When explaining GNNs, we distinguish between how the dataset was constructed and how the GNN makes its predictions. We call a motif responsible when the dataset is constructed such that the motif's presence or absence determines the class label of a graph. Hence, the responsible motif represents the underlying evidence (ground truth) that discriminates among the labels and that we hope the explanatory method will find (Faber et al., 2021). When, instead, a motif is responsible for the prediction of a certain class label, we refer to the edges present in the motif as the ones causing the prediction (causing motif). Existing XAI methods for GNNs have several limitations and can lead to inconsistencies. In fact, there can be a mismatch between the responsible motif (ground truth), the motif actually used by the pre-trained model for its prediction (causing motif), and the one identified by the explanatory model (explanatory motif) (Duval & Malliaros, 2021; Faber et al., 2021). In contrast, in our work, we know that the prediction of the class label is caused by the explaining motif, as its selection by the upstream model caused the downstream model to make said prediction. As anticipated, we focus on the problem of identifying a subset of the edges as an explanation of the model's message-passing behavior. Hence, an explanation is equivalent to a mask on the adjacency matrix of the original graph. Intuitively, an explanation can be accurate and/or faithful. It is accurate if it identifies the edges in the input graph responsible for the graph's class label, i.e., if the explanatory motif matches the responsible motif. This property can, for example, be evaluated with synthetic data where the class label of a graph is determined by the presence or absence of a particular substructure. An explanation is faithful if the edges identified as the explanation cause the prediction of the GNN on an input graph, i.e., if the explanatory motif matches the causing motif. Contrary to measuring accuracy, there is no consensus on how to evaluate faithfulness.

Recent work has proposed to measure unfaithfulness as the difference between the predictions of (1) the GNN on a perturbed adjacency matrix and (2) the GNN on the same perturbed adjacency matrix with the edges removed by the explanation mask (Pope et al., 2019; Agarwal et al., 2022b, a). We believe that this definition is problematic, as the perturbation is typically implemented using a swap operation that replaces two existing edges (a, b) and (c, d) with two new edges (a, c) and (b, d). Hence, these new edges are present in the unmasked adjacency matrix but not in the masked one. It is, however, natural that the same GNN would predict highly different label distributions on these two graphs. For instance, consider a chemical compound where we remove and add bonds: the resulting compounds and their properties can be chemically very different. Hence, contrary to prior work, we define a subgraph to be a faithful explanation if it is a significantly smaller subgraph of the input graph and we know that only its structure is used in the message-passing operations of Eq. (1).

Fig. 1: Workflow of the proposed approach. The upstream model \(h_{\varvec{v}}\) learns to assign weights \(\theta _{\cdot,\cdot }\) for each edge in the input graph. The edge matrix \(\varvec{\theta }\) (perturbed with \(\varvec{\epsilon }\)) is then utilized as input by the optimization algorithm \(\texttt{opt}\) to sample a subgraph \(\varvec{z}\) with specific characteristics. Finally, the downstream model \(f_{\varvec{u}}\) uses only the information about the sampled (sub)graph to make a prediction

4 Learning to explain graph neural networks

We propose a method that learns both (i) the parameters of a graph generative model and (ii) the parameters of a GNN operating on sparse subgraphs approximately sampled from said generative model in the forward pass. In line with prior work on learning to explain (Chen et al., 2018), the maximum probability subgraph is then used at test time to make the prediction and, therefore, serves as the faithful explanation. Since we aim to sample graphs with certain properties (e.g., connected subgraphs) we need a new approach to sampling and gradient estimation. Contrary to prior work on edge masking (Schlichtkrull et al., 2021) which treats edges as independent binary random variables, we use a recently introduced method for backpropagating through optimization algorithms. This allows us to select subgraphs with specific properties and, therefore, to explicitly model dependencies between edge variables.

Intuitively, our approach consists of three main components. In the first component, an upstream model \(h_{\varvec{v}}\) learns the edge weights \(\theta _{(i,j)}\) for each edge (i, j) of the given input graph. In the second component, the learned edge weight matrix \(\varvec{\theta }\) is given as input to an optimization algorithm \(\texttt{opt}\). The algorithm treats the weights \(\varvec{\theta }\) as unnormalized probabilities and discretely samples a new adjacency matrix \(\varvec{Z}\). Finally, the resulting sampled subgraph \(\varvec{z}\) is used in the last component, the downstream model \(f_{\varvec{u}}\), to make the final prediction. A graphical representation of our approach is presented in Fig. 1. Considering the proposed workflow, we can identify two main challenges: (a) how to learn \(\varvec{\theta }\) such that we can improve the selection of the subgraph \(\varvec{z}\); (b) how to estimate and backpropagate the gradient through a discrete component (i.e., \(\texttt{opt}\)). In the following subsections, we explain our framework in more detail and provide technical solutions for these challenges. In Sect. 4.1, we formalize the problem and rigorously describe our framework. In Sect. 4.2, we describe the gradient estimation method used in this work. Finally, in Sect. 4.3, we detail how to use and adapt the introduced concepts to explain GNNs.

4.1 Problem statement and framework

We aim to jointly learn the parameters of a probability distribution over subgraphs with certain properties and the parameters of a GNN operating on graphs sampled from said distribution in the context of the graph classification problem. Here, the training data consists of a set of triples \(\{(\textbf{A}, \textbf{X}, \textbf{y})_j\}, j \in \{1,..., N\}\), where \(\textbf{A}\) is an \(n \times n\) binary adjacency matrix, \(\textbf{X} \in \mathbb {R}^{n \times d}\) a node attribute matrix with d the number of node attributes, and \(\textbf{y}\) the target graph label. First, we have a learnable function \(h_{\varvec{v}}: \mathcal {A} \times \mathcal {X} \rightarrow \Theta \) where \(\mathcal {A}\) is the set of all \(n \times n\) adjacency matrices, \(\mathcal {X}\) the set of all attribute matrices, \(\varvec{v}\) are the parameters of h, and \(\Theta \) the set of possible edge parameter values. The function, which we refer to as the upstream model, maps the adjacency and attribute matrix to a matrix of edge weights \(\varvec{\theta }\in \mathbb {R}^{n \times n}\). Intuitively, \(\varvec{\theta }_{i,j}\) is the prior probability of edge (i, j).

Next, we assume an algorithm \(\texttt{opt}: \Theta \rightarrow \mathcal {A}\) which returns (approximate) solutions to an optimization problem on edge-weighted graphs. Examples of such optimization problems are the maximum-weight spanning tree and the maximum-weight k-edge connected subgraph problems. The optimization algorithm is treated as a black box. One can choose the optimization problem according to the application’s requirements. We have found, for instance, that connected subgraphs lead to better explanations in the domain of chemical compound classification. Contrary to prior work, the optimization problem creates a dependency between the binary variables modeling the edges.

For every binary adjacency matrix \(\varvec{Z}\in \mathcal {A}\), we write \(\varvec{Z}\in \mathcal {F}\) if and only if the adjacency matrix is a feasible solution (not necessarily an optimal one) of the chosen optimization problem. We can now define a discrete exponential family distribution as

$$\begin{aligned} p(\varvec{Z}; \varvec{\theta }) = \left\{ \begin{array}{ll} \exp \left( \langle \varvec{Z}, \varvec{\theta }\rangle _{F}-B(\varvec{\theta }) \right) &{} \text {if } \varvec{Z}\in \mathcal {F}, \\ 0 &{} \text {otherwise.} \end{array} \right. \end{aligned}$$
(3)

where \(\langle \cdot, \cdot \rangle _{F}\) is the Frobenius inner product and \(B(\varvec{\theta })\) is the log-partition function defined as

$$B(\varvec{\theta }) = \log \left( \sum _{\varvec{Z}\in \mathcal {F}} \exp \left( \langle \varvec{Z}, \varvec{\theta }\rangle _{F} \right) \right) .$$

Hence, p is a probability distribution over adjacency matrices that are feasible solutions to the optimization problem under consideration. Each feasible adjacency matrix’s probability mass is proportional to the product of its edge weights. For example, if the optimization problem is the maximum-weight k-edge connected subgraph problem, the distribution assigns a non-zero probability mass to all adjacency matrices of graphs that have k edges and are connected.

Given an optimization problem, we would like to sample exactly from the above probability distribution \(p(\varvec{Z}; \varvec{\theta })\). Unfortunately, this is intractable since computing the log-partition function is in general NP-hard. However, as in prior work (Niepert et al., 2021), we can use perturb-and-MAP (Papandreou & Yuille, 2011) to approximately sample from the above distribution as follows. Let \(\varvec{\epsilon }\sim \rho (\varvec{\epsilon })\) be an \(n \times n\) matrix of appropriate random variables, for instance following the Gumbel distribution. We can then approximately sample an adjacency matrix \(\varvec{Z}\) from \(p(\varvec{Z}; \varvec{\theta })\) by computing

$$\varvec{Z}= \texttt{opt}(\varvec{\theta }+ \varvec{\epsilon }).$$

Hence, we can approximately sample by perturbing the edge weights (unnormalized probabilities) \(\varvec{\theta }\) and by applying the optimization algorithm to these perturbed weights.
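As an illustration of perturb-and-MAP, the sketch below perturbs the edge weights with standard Gumbel noise and applies a simple \(\texttt{opt}\) for the maximum-weight k-edge subgraph problem (the top-k edges of the input graph). The function names and the use of Gumbel rather than Sum-of-Gamma noise are assumptions made for illustration only.

```python
import torch


def topk_edges_opt(weights: torch.Tensor, A: torch.Tensor, k: int) -> torch.Tensor:
    """opt for the maximum-weight k-edge subgraph: keep the k highest-weight edges of A."""
    n = weights.size(0)
    candidate = torch.triu(A, diagonal=1) > 0           # each undirected edge counted once
    w = weights.masked_fill(~candidate, float("-inf"))  # non-edges are never selected
    k = min(k, int(candidate.sum()))
    idx = torch.topk(w.flatten(), k).indices
    Z = torch.zeros(n, n)
    Z.view(-1)[idx] = 1.0
    return Z + Z.t()                                    # symmetric binary adjacency matrix


def sample_subgraph(theta: torch.Tensor, A: torch.Tensor, k: int) -> torch.Tensor:
    # Perturb-and-MAP: perturb the edge weights, then solve the optimization problem.
    eps = -torch.log(-torch.log(torch.rand_like(theta)))  # standard Gumbel noise
    return topk_edges_opt(theta + eps, A, k)


# Toy usage on a random undirected input graph.
A = (torch.rand(6, 6) > 0.5).float()
A = torch.triu(A, 1) + torch.triu(A, 1).t()
theta = torch.randn(6, 6)
print(sample_subgraph(theta, A, k=4))
```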

In the final part of the model (the downstream model), we use the sampled \(\varvec{Z}\) as the input adjacency matrix to a message-passing neural network \(f_{\varvec{u}}: \mathcal {A} \times \mathcal {X} \rightarrow \mathcal {Y}\) computing \(\hat{\varvec{y}} = f_{\varvec{u}}(\varvec{Z}, \varvec{X})\).

In summary, we have the following model architecture for training input data \((\varvec{A}, \varvec{X}, \varvec{y})\):

$$\begin{aligned} \varvec{\theta }&= h_{\varvec{v}}(\varvec{A}, \varvec{X})&\text{ with } \ \varvec{A}\in \mathcal {A}, \varvec{X} \in \mathcal {X}, \end{aligned}$$
(4)
$$\begin{aligned} \varvec{Z}&= \texttt{opt}(\varvec{\theta }+ \varvec{\epsilon })&\text{ with } \ \varvec{\epsilon }\sim \rho (\epsilon ), \varvec{\epsilon }\in \mathbb {R}^{n \times n}, \end{aligned}$$
(5)
$$\begin{aligned} \hat{\varvec{y}}&= f_{\varvec{u}}(\varvec{Z}, \varvec{X})&\text{ with } \ \hat{\varvec{y}} \in \mathcal {Y}, f_{\varvec{u}}: \mathcal {A} \times \mathcal {X} \rightarrow \mathcal {Y}. \end{aligned}$$
(6)

Figure 1 illustrates the architecture. With \(\varvec{\omega } = (\varvec{u},\varvec{v})\) the learnable parameters of the model and \(\varvec{y}\) the target variable, the loss is defined as:

$$\begin{aligned} L(\varvec{A}, \varvec{X}, \varvec{y};\varvec{\omega }) = \mathbb {E}_{\varvec{\epsilon }\sim \rho (\epsilon )}[\ell (f_{\varvec{u}}(\varvec{Z}, \varvec{X}),\varvec{y})], \end{aligned}$$
(7)

with \(\varvec{Z}= \texttt{opt}(\varvec{\theta }+ \varvec{\epsilon }) \), \(\varvec{\theta }=h_{\varvec{v}}(\varvec{A},\varvec{X})\), and \(\ell : \mathcal {Y} \times \mathcal {Y} \rightarrow \mathbb {R}^{+}\) a point-wise loss function. The gradient of L with respect to \(\varvec{u}\) is

$$\begin{aligned} \nabla _{\varvec{u}} L(\varvec{A}, \varvec{X}, \varvec{y};\varvec{\omega }) = \mathbb {E}_{\varvec{\epsilon }\sim \rho (\epsilon )} [\partial _{\varvec{u}}f_{\varvec{u}}(\varvec{Z}, \varvec{X})^{\intercal } \nabla _{\hat{\varvec{y}}} \ell (\hat{\varvec{y}},\varvec{y})] \end{aligned}$$

which can be estimated by Monte-Carlo sampling. In contrast, the gradient of L with respect to \(\varvec{v}\) is:

$$\begin{aligned} \nabla _{\varvec{v}} L(\varvec{A}, \varvec{X}, \varvec{y};\varvec{\omega }) = \partial _{\varvec{v}}h_{\varvec{v}}(\varvec{A}, \varvec{X})^{\intercal } \nabla _{\varvec{\theta }} L(\varvec{A}, \varvec{X}, \varvec{y};\varvec{\omega }), \end{aligned}$$

where the challenge is to estimate \(\nabla _{\varvec{\theta }} L(\varvec{A}, \varvec{X}, \varvec{y};\varvec{\omega }) = \nabla _{\varvec{\theta }}\mathbb {E}_{\varvec{\epsilon }\sim \rho (\epsilon )}[\ell (f_{\varvec{u}}(\varvec{Z}, \varvec{X}),\varvec{y})]\), because \(\varvec{Z}= \texttt{opt}(\varvec{\theta }+ \varvec{\epsilon })\) is not continuously differentiable with respect to \(\varvec{\theta }\). While it would be possible to use the score function estimator, its high variance makes it less competitive in practice (Niepert et al., 2021).

4.2 Implicit maximum-likelihood learning

The variant of I-MLE we use in this work estimates \(\nabla _{\varvec{\theta }} L(\varvec{A}, \varvec{X}, \varvec{y};\varvec{\omega })\) by implicitly creating a target distribution \(q(\varvec{Z}; \varvec{\theta }')\) using perturbation-based implicit differentiation (Domke, 2010). Here, the parameters \(\varvec{\theta }\) are moved in the direction of \(-\nabla _{\varvec{Z}} \ell (f_{\varvec{u}}(\varvec{Z}, \varvec{X}),\varvec{y})\), the negative gradient of the downstream loss with respect to the sampled adjacency matrix \(\varvec{Z}\), to construct \(\varvec{\theta }'\)

$$\begin{aligned} q(\varvec{Z};\varvec{\theta }^{\prime }) := p(\varvec{Z};\varvec{\theta } - \lambda \nabla _{\varvec{Z}} \ell (f_{\varvec{u}}(\varvec{Z}, \varvec{X}),\varvec{y})) \end{aligned}$$
(8)

with \(\varvec{Z}= \texttt{opt}(\varvec{\theta }+ \varvec{\epsilon })\) and \(\lambda > 0\) the strength of the perturbation. Intuitively, by moving the weights \(\varvec{\theta }\) in the direction of the negative gradient with respect to \(\varvec{Z}\), the resulting distribution q is more likely to generate samples with a lower downstream loss. We approximate \(\nabla _{\varvec{\theta }} L(\varvec{A}, \varvec{X}, \varvec{y};\varvec{\omega })\) with Monte Carlo estimates of the gradients of the KL divergence between p and q:

$$\begin{aligned} \nabla _{\varvec{\theta }} L(\varvec{A}, \varvec{X}, \varvec{y};\varvec{\omega }) \approx \frac{1}{\lambda } \left( \texttt{opt}(\varvec{\theta }+ \varvec{\epsilon }) - \texttt{opt}(\varvec{\theta }^{\prime } + \varvec{\epsilon })\right) . \end{aligned}$$
(9)

In other words, \(\nabla _{\varvec{\theta }} L(\varvec{A}, \varvec{X}, \varvec{y};\varvec{\omega })\) is approximated by the difference between an approximate sample from \(p(\varvec{Z};\varvec{\theta })\) and an approximate sample from \(q(\varvec{Z};\varvec{\theta }^{\prime })\). In this way we move the distribution \(p(\varvec{Z};\varvec{\theta })\) closer to \(q(\varvec{Z};\varvec{\theta }^{\prime })\).
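The following is a minimal sketch of this estimator, written as a custom autograd function so that the non-differentiable call to \(\texttt{opt}\) can sit inside a standard computation graph: the forward pass computes \(\varvec{Z}= \texttt{opt}(\varvec{\theta }+ \varvec{\epsilon })\) and the backward pass implements Eqs. (8)-(9). The class and argument names are illustrative; `opt_fn` and `noise_fn` stand for any black-box solver and noise distribution.

```python
import torch


class IMLESubgraph(torch.autograd.Function):
    """Forward: Z = opt(theta + eps). Backward: Eq. (9) with target parameters from Eq. (8)."""

    @staticmethod
    def forward(ctx, theta, opt_fn, noise_fn, lam):
        eps = noise_fn(theta)              # e.g. Gumbel or Sum-of-Gamma noise
        Z = opt_fn(theta + eps)            # approximate sample from p(Z; theta)
        ctx.save_for_backward(theta, eps)
        ctx.opt_fn, ctx.lam = opt_fn, lam
        return Z

    @staticmethod
    def backward(ctx, grad_Z):
        theta, eps = ctx.saved_tensors
        theta_prime = theta - ctx.lam * grad_Z                       # Eq. (8)
        grad_theta = (ctx.opt_fn(theta + eps)
                      - ctx.opt_fn(theta_prime + eps)) / ctx.lam     # Eq. (9)
        return grad_theta, None, None, None                          # gradients only for theta


# Usage inside a forward pass (opt_fn / noise_fn as sketched earlier, lambda = 100):
# Z = IMLESubgraph.apply(theta,
#                        lambda w: topk_edges_opt(w, A, k),
#                        lambda t: -torch.log(-torch.log(torch.rand_like(t))),
#                        100.0)
```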

4.3 L2XGNN: learning to explain GNNs with I-MLE

We now describe the class of L2xGnn models we use in the experiments. First, we need to define the function \(h_{\varvec{v}}(\varvec{A}, \varvec{X})\). Here we use a standard GNN (see Eq. 1) to compute for every node i and every layer \(\ell \) the vector representation \(\textbf{h}_i^{\ell } = h_{\varvec{v}}(\varvec{A}, \varvec{X})_{i,1:d}\). We then compute the matrix of edge weights by taking the inner product between each pair of node embeddings. More formally, we compute \(\varvec{\theta }_{i,j} = \langle \textbf{h}^{\ell }_i, \textbf{h}_j^{\ell } \rangle \) for some fixed \(\ell \). Typically, we choose \(\ell =1\).
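A small sketch of this edge-weight computation follows. Restricting the weights to edges present in the input graph is an assumption of the sketch (in practice, \(\texttt{opt}\) only considers the edges of the input graph), and the function name is illustrative.

```python
import torch


def edge_weights(node_emb: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
    """theta_{i,j} = <h_i, h_j>, kept only for edges present in the input graph."""
    theta = node_emb @ node_emb.t()   # pairwise inner products of node embeddings
    return theta * A                  # weights of non-edges are irrelevant and zeroed out
```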

In this work, we sample the noise perturbations \(\varvec{\epsilon }\) from the Sum-of-Gamma distribution (Niepert et al., 2021). Other noise distributions, such as the Gumbel distribution, are possible.

4.3.1 Sampling constrained subgraphs

An advantage of the proposed method is its ability to integrate any graph optimization problem as long as there exists an algorithm \(\texttt{opt}\) for computing (approximate) solutions. In this work, we focus on two optimization problems: (1) The maximum-weight k-edge subgraph and (2) the maximum-weight k-edge connected subgraph problems. The former aims to find a maximum-weight subgraph with k edges. The latter aims to find a connected maximum-weight subgraph with k edges. Other optimization problems are possible but we found that sparse and connected subgraphs provide a good efficiency-effectiveness trade-off.

Computing maximum-weight k-edge subgraphs is highly efficient, as we only need to select the k edges with the maximum weights. To compute connected k-edge subgraphs, we use a greedy approach. First, given a number k of edges, we select the single edge \(e_{i,j}\) with the highest weight \(\varvec{\theta }_{i,j}\) in the input graph. At every subsequent iteration, we select the next edge such that it (a) is connected to a previously selected edge and (b) has the maximum weight among all such connected edges. A more detailed description of the greedy algorithm is given in Algorithm 1.

Algorithm 1: Greedy algorithm \(\texttt{opt}\) for the maximum-weight k-edge connected subgraph problem.
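For concreteness, the following is a plain-Python sketch of the greedy procedure described above; it is a stand-in for Algorithm 1, and the data structures and names are illustrative rather than the authors’ implementation.

```python
import torch


def greedy_connected_k_edges(theta: torch.Tensor, A: torch.Tensor, k: int) -> torch.Tensor:
    """Greedily grow a connected subgraph with (at most) k maximum-weight edges."""
    n = theta.size(0)
    # Candidate edges of the input graph (undirected, i < j), sorted by weight.
    edges = sorted(
        [(theta[i, j].item(), i, j)
         for i in range(n) for j in range(i + 1, n) if A[i, j] > 0],
        reverse=True,
    )
    Z = torch.zeros(n, n)
    covered = set()                                   # nodes touched by selected edges
    for _ in range(min(k, len(edges))):
        pick = None
        for pos, (_, i, j) in enumerate(edges):
            # First pick: the globally heaviest edge; afterwards: the heaviest
            # remaining edge that is connected to the current subgraph.
            if not covered or i in covered or j in covered:
                pick = pos
                break
        if pick is None:                              # no edge can extend the subgraph
            break
        _, i, j = edges.pop(pick)
        Z[i, j] = Z[j, i] = 1.0
        covered.update((i, j))
    return Z
```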

Finally, we need to define the function \(f_{\varvec{u}}\) (the downstream function) of the proposed framework. Here, we again use a message-passing GNN that follows the update rule

$$\begin{aligned} \textbf{h}_i^{\ell } = \gamma \left( \textbf{h}_i^{\ell -1}, \square _{j \in \mathcal {N}(v_i)} \phi \left( \textbf{h}_i^{\ell -1}, \textbf{h}_j^{\ell -1}, r_{ij} \right) \right) . \end{aligned}$$
(10)

The neighborhood structure \(\mathcal {N}(\cdot )\), however, is defined through the output adjacency matrix \(\varvec{Z}\) of the optimization algorithm \(\texttt{opt}\)

$$\begin{aligned} j \in \mathcal {N}(v_i) \Longleftrightarrow \varvec{Z}_{i, j} = \varvec{Z}_{j, i} = 1. \end{aligned}$$
(11)

Hence, if, after subgraph sampling, a node \(v_i\) is isolated in the adjacency matrix \(\varvec{Z}\), that is, \(\varvec{Z}_{i,j} = \varvec{Z}_{j,i} = 0 \ \forall j \in \{1,..., n\}\), its embedding will not be updated through message-passing steps with neighboring nodes. This means that, for isolated nodes, the only information used in the downstream model is the information from the nodes themselves. Conceptually, \(\varvec{Z}\) acts as a mask over the messages \(m_{ij}^\ell \) computed at each layer \(\ell \).

The adjacency matrix \(\varvec{Z}\) is then used in all subsequent layers of the GNN. In particular, for one layer \(\ell \) we have

$$\begin{aligned} \textbf{H}_\ell = \textsc {Gnn}_\ell (\varvec{A}\odot \varvec{Z}, \textbf{H}_{\ell -1}), \end{aligned}$$
(12)

where \(\odot \) is the Hadamard product. Finally, the remaining part of the L2xGnn network for graph classification is

$$\begin{aligned} \textbf{h}_{G} = \text {Pool}(\textbf{H}_\ell ) \qquad \hat{\varvec{y}} = \eta (\textbf{h}_G), \end{aligned}$$
(13)

where we use a global pooling operator to generate the (sub)graph representation \(\textbf{h}_G\), which is then used by the MLP network \(\eta (\cdot )\) to output a probability distribution \(\hat{\varvec{y}}\) over the class labels. Finally, a loss function is applied, whose gradients are used for backpropagation. At test time, we use the maximum-probability subgraph for both the explanation and the prediction, that is, we do not perturb at test time.
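Putting Eqs. (12) and (13) together, the following is a hedged sketch of the downstream model \(f_{\varvec{u}}\): GIN-style message passing on the masked adjacency matrix, global pooling, and a classification head. The class name, the choice of mean pooling, and the single-layer head standing in for \(\eta \) are illustrative assumptions, not the paper’s exact configuration.

```python
import torch
import torch.nn as nn


class MaskedGIN(nn.Module):
    """Downstream model f_u: GIN message passing on A ⊙ Z, pooling, and an MLP head."""

    def __init__(self, in_dim: int, hid_dim: int, num_classes: int, num_layers: int = 3):
        super().__init__()
        dims = [in_dim] + [hid_dim] * num_layers
        self.eps = nn.Parameter(torch.zeros(num_layers))
        self.mlps = nn.ModuleList([
            nn.Sequential(nn.Linear(dims[l], dims[l + 1]), nn.ReLU(),
                          nn.Linear(dims[l + 1], dims[l + 1]))
            for l in range(num_layers)
        ])
        self.head = nn.Linear(hid_dim, num_classes)   # the readout MLP eta (one layer here)

    def forward(self, A: torch.Tensor, Z: torch.Tensor, X: torch.Tensor) -> torch.Tensor:
        A_masked = A * Z                  # Hadamard product: Z masks every message (Eq. 12)
        H = X
        for l, mlp in enumerate(self.mlps):
            H = mlp((1 + self.eps[l]) * H + A_masked @ H)   # GIN update on the masked graph
        h_G = H.mean(dim=0)               # global mean pooling -> graph representation (Eq. 13)
        return self.head(h_G)             # logits over the class labels
```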

5 Experiments

First, we evaluate the predictive performance of the model compared to baselines. Second, we qualitatively and quantitatively analyze the explanatory subgraphs on datasets for which the ground-truth motifs are known. Finally, we analyze whether the generated output can be helpful for model debugging purposes. Additionally, we report several ablation studies to investigate the effects of different model choices on the predictive and explanatory performance of our approach. For the remainder of the manuscript, we use L2xGnn\(_\texttt {dsc}\) and L2xGnn to refer to the maximum-weight k-edge subgraph and the maximum-weight k-edge connected subgraph variants, respectively. The code for reproducing our experiments is available here.

5.1 Datasets and settings

5.1.1 Datasets

To understand the change in the predictive capabilities of the base models when integrating L2xGnn, we use six real-world datasets from different domains (biology, social networks) for graph classification tasks: MUTAG (Debnath et al., 1991), PROTEINS (Borgwardt et al., 2005), YEAST (Yan et al., 2008), IMDB-BINARY, IMDB-MULTI (Yanardag et al., 2015), and DD (Rossi & Ahmed, 2015). In Table 1, we report the statistics of the datasets used for graph classification tasks. For a comprehensive evaluation, we include datasets with different characteristics, such as a larger number of graphs or a larger number of nodes and edges.

Table 1 Statistics of the datasets

To quantitatively evaluate the quality of the explanations, we use datasets that include ground-truth edge masks. In particular, we use MUTAG\(_0\) and BA2Motifs. MUTAG\(_0\) is a dataset introduced in Tan et al. (2022) which contains benzene-NO\(_2\) (i.e., a carbon ring with a nitro group (NO\(_2\)) attached) as the only motif discriminating between positive and negative labels. A graphical representation of the benzene-NO\(_2\) compound is given in Fig. 2. BA2Motifs is a synthetic dataset first introduced in Luo et al. (2020). The base graphs are Barabasi-Albert (BA) graphs. 50% of the graphs are augmented with a house motif, the rest with a 5-node cycle motif. The discriminative subgraph leading to different predictions is the motif attached to the BA graph.

5.1.2 Experimental settings

To evaluate the quality of our approach, we use L2xGnn with several GNN base models, including GCN (Kipf & Welling, 2017), GIN (Xu et al., 2018), and GraphSAGE (Hamilton et al., 2017). We compare the results of the original model with those of the same model combined with our XAI method. For model selection and evaluation, to fairly compare the methods, we follow a previously proposed protocol. We perform a 10-fold cross-validation where the hyperparameter selection is done according to the validation accuracy. The selection is performed over the number of layers (L) [1, 2, 3, 4] and the number of hidden units (H) [16, 32, 64, 128]. For both parameters, the selected values represent a standard range for the backbone architecture. For a fair comparison with the backbone architectures, we select the best configuration for each dataset and integrate our approach into the best model. Instead of fixing a value k for each input graph, we compute k based on a ratio R of edges to be kept. Once the hyperparameters of the default model are found, we select the best ratio R (the percentage of edges to keep) from the set of values [0.4, 0.5, 0.6, 0.7], again based on the validation accuracy. We do not include extreme values for two reasons: (1) smaller values of R lead to reduced predictive capabilities and less meaningful explanatory subgraphs; and (2) higher values would not remove enough edges compared to the original input. Finally, we choose the perturbation intensity \(\lambda \) from the values [10, 100, 1000] taken from the original paper (Niepert et al., 2021).

Experiments were run on a single Linux machine with an Intel Core i7-11370H @ 3.30GHz, one GeForce RTX 3060, and 16 GB RAM. The best hyperparameter configuration for each model and dataset used for graph classification tasks is reported in Table 2. First, for the backbone architectures, we consider the number of layers [1, 2, 3, 4] and the number of hidden units [16, 32, 64, 128]. Then, for L2xGnn, we select the ratio R from [0.4, 0.5, 0.6, 0.7] and the perturbation intensity \(\lambda \) from [10, 100, 1000].

Table 2 Hyperparameter settings for graph classification tasks

5.2 Empirical results

5.2.1 Graph classification comparison with Base GNNs

Following the experimental procedure proposed in Zhang et al. (2022), Table 3 lists the results of using L2xGnn with base GNN architectures for graph classification tasks. We observe that L2xGnn is competitive with and often even outperforms the base GNN models on the benchmark datasets. The primary goal of this work is not to provide a better predictive model, but to provide faithful explanation masks while maintaining similar predictive performance. To verify this point, we perform a paired t-test via 5x2 cross-validation with significance level \(\alpha =0.05\) (Dietterich, 1998) (see “Appendix A.5” for more details). The test indicates that there is no statistically significant difference between the base models and their explainable counterparts (either in the connected or the disconnected version). This analysis is important since inherently interpretable networks are known to come with a trade-off in predictive capability, and practitioners may not be willing to sacrifice prediction accuracy for increased transparency (Miao et al., 2022).

Table 3 Prediction test accuracy (%) for graph classification tasks over ten runs

5.2.2 Explanation accuracy

We compare the proposed method with popular post-hoc explanation techniques, including GNN-Explainer (Ying et al., 2019), PGE-Explainer (Luo et al., 2020), GradCAM (Pope et al., 2019), GNN-LRP (Schnake et al., 2021), and SubgraphX (Yuan et al., 2021). We train a 3-layer GIN for 200 epochs with a hidden dimension of 64 and a learning rate of 0.001. We save the best model according to the validation accuracy and compare it with the post-hoc techniques. In our case, we integrate L2xGnn into the same architecture and learn the edge masking during training as described before. We report the graph classification results for the two datasets in the “Appendix”. In Table 4, we report the explanation accuracy with respect to the ground-truth motifs in comparison with post-hoc techniques over 5 different data splits. The explanation problem is formalized as a binary classification problem, where the edges belonging to the ground-truth motif are treated as positive labels. We observe that L2xGnn obtains results that are better than or on par with those of the considered explanatory models. While for the post-hoc explanation techniques we cannot guarantee that the GNNs use exclusively the explanation subgraphs for the prediction (Yuan et al., 2022), our method overcomes this limitation by providing faithful explanations: it is exactly the provided explanation that is used in the message-passing operations of L2xGnn.
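Since the explanation problem is formalized as edge-level binary classification, one plausible way to compute the explanation accuracy is sketched below; whether plain accuracy or another edge-level metric is used is not specified here, so the function and its names are illustrative assumptions.

```python
import torch


def explanation_accuracy(Z: torch.Tensor, gt_mask: torch.Tensor, A: torch.Tensor) -> float:
    """Edge-level accuracy of the sampled mask Z against the ground-truth edge mask."""
    edge = torch.triu(A, diagonal=1) > 0      # evaluate each undirected edge once
    pred = Z[edge] > 0                        # edges kept by the explanation
    true = gt_mask[edge] > 0                  # edges belonging to the ground-truth motif
    return (pred == true).float().mean().item()
```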

Table 4 Evaluation of explanation accuracy (%) on synthetic graph classification datasets using a 3-layer GIN architecture
Fig. 2: Benzene-NO\(_2\) motif

5.2.3 Qualitative evaluation of the explanations

In Fig. 3, we present some of the subgraphs identified by L2xGnn when combined with two different base GNNs. Based on prior studies and chemical domain knowledge (Debnath et al., 1991; Lin et al., 2021; Tan et al., 2022), carbon rings (the black circles in the pictures) and \(\text {NO}_2\) groups are known to be mutagenic. Interestingly, when using the information of connected subgraphs, the models are able to recognize a complete carbon ring with an \(\text {NO}_2\) group in most cases. In some cases, the carbon ring is not complete, but the explanations are still helpful for understanding which motifs are potentially important for the prediction. With the subscript dsc, we report the results of the sampling strategy when we do not require subgraphs to be connected. In this case, the carbon rings are not always identified; the \(\text {NO}_2\) group, instead, is always considered important for the prediction. More generally, as also reported in Yuan et al. (2021), studying connected subgraphs results in more natural motifs than those obtained without the connectedness constraint. A visual comparison of the explanations generated by L2xGnn and by the baselines can be found in Fig. 6 in the “Appendix”.

Fig. 3: Visualization of some of the subgraphs selected by L2xGnn for MUTAG\(_0\) on the test set. The solid edges represent the ones sampled by our approach. The subscript dsc indicates the maximum-weight k-edge subgraph problem (i.e., possibly disconnected subgraphs). Black, blue, red, and gray nodes represent carbon (C), nitrogen (N), oxygen (O), and hydrogen (H) atoms, respectively

Fig. 4: Effect of the edge ratio on the prediction accuracy (%)

5.2.4 Ablation study

In Sect. 5.2.1, we compare the two sampling strategies. The results show that connected sampling obtains better results than its non-connected counterpart on most datasets. In fact, the connectivity of subgraphs is essential to capture the complete information about the important patterns, especially for chemical compound data, where connected atoms are usually expected to form molecules or chemical groups. This aspect is also supported by the results obtained in the explanation accuracy task, where the connected strategy returns better explanations for the chemical dataset. Additionally, as previously mentioned, evaluating connected structures rather than individual important edges is more natural and intelligible. In Fig. 4, we analyze the effect of the amount of retained information on the prediction accuracy. A smaller ratio indicates that we retain fewer edges during training; consequently, the resulting subgraphs are sparser and, therefore, more interpretable. As one can see, this affects the predictive capabilities only when R is small. Starting from \(R=0.5\), the ratio does not particularly affect the predictive capabilities of the model. In fact, for graph classification tasks, some of the information contained in the initial computational graph does not influence the prediction, as it may be redundant or noisy. For instance, for the MUTAG dataset, we know that the initial graphs contain on average 20 edges. The discriminative motif benzene-\(\text {NO}_2\), instead, contains around 9 edges, meaning that we ideally need about 50% of the original edges to obtain good results. This is in line with the findings of this analysis and the graph classification results previously reported in Tables 3 and 4.

5.2.5 Shortcut learning detection

By generating faithful subgraph explanations, our approach can be used to detect whether the predictive model is focusing on the expected features or whether it is affected by shortcut learning. This is particularly important for GNNs, where seemingly small implementation differences can influence the learning process of the model (Schlichtkrull et al., 2021). To this end, we use the BA2Motifs dataset (Luo et al., 2020). We trained two different models, GCN and GIN, achieving test accuracies of 0.67 and 1.0, respectively. Taking a closer look at the explanations of the first model, we observed that most of the correct predictions were (incorrectly) correlated with the cycle motif and that the explanations were similar to the ones reported in Fig. 5. These explanatory results show that the model is not learning the expected discriminative motifs and, consequently, its accuracy on the test set is poor. This insight can help users change the configuration of the architecture or use a different model (e.g., GIN). More generally, the results highlight that faithful explanations can facilitate model analysis and debugging.

Fig. 5: Example of understanding the model’s reasoning through the visualization of the generated explanations

6 Conclusion and limitations

We propose L2xGnn, a framework that can be integrated into GNN architectures to learn to generate explanatory subgraphs which are exclusively used for the models’ predictions. Our experimental findings demonstrate that integrating L2xGnn with base GNNs does not affect the predictive capabilities of the model for graph classification tasks. Furthermore, according to the definition provided in the paper, the resulting explanations are faithful, since the retained information is the only information used by the model for prediction. Hence, differently from most common techniques, our explanations reveal the rationale of the GNNs and can also be used for model analysis and debugging. A limitation of the approach is its reduced efficiency compared to baseline GNN models. Since we need to integrate an algorithm that computes (approximate) solutions to a combinatorial optimization problem, each forward pass requires more time and resources. Moreover, depending on the choice of the optimization problem, we might not capture the structure of the explanatory motifs required for the application under consideration.