157 Results
- **A Remark on Concept Drift for Dependent Data** (Chapter and Conference Paper)
  Concept drift, i.e., the change of the data generating distribution, can render machine learning models inaccurate. Several works address the phenomenon of concept drift in the streaming context usually assumi...
- **Incremental permutation feature importance (iPFI): towards online explanations on data streams** (Article, Open Access)
  Explainable artificial intelligence has mainly focused on static learning scenarios so far. We are interested in dynamic scenarios where data is sampled progressively, and learning is done in an incremental ra...
- **Contrasting Explanations for Understanding and Regularizing Model Adaptations** (Article, Open Access)
  Many of today’s decision making systems deployed in the real world are not static; they change and adapt over time, a phenomenon known as model adaptation. Because of their wide reaching in...
- **Metric Learning with Self-Adjusting Memory for Explaining Feature Drift** (Article)
  Lifelong and incremental learning constitute key algorithms when dealing with streaming data in possibly non-stationary environments. Because of their capability of adapting to varying model complexity, non-pa...
- **Novel transfer learning schemes based on Siamese networks and synthetic data** (Article, Open Access)
  Transfer learning schemes based on deep networks which have been trained on huge image corpora offer state-of-the-art technologies in computer vision. Here, supervised and semi-supervised approaches constitute...
- **Modularity in Nervous Systems—a Key to Efficient Adaptivity for Deep Reinforcement Learning** (Article, Open Access)
  Modularity as observed in biological systems has proven valuable for guiding classical motor theories towards good answers about action selection and execution. New challenges arise when we turn to learning: T...
- **Adversarial Attacks on Leakage Detectors in Water Distribution Networks** (Chapter and Conference Paper)
  Many Machine Learning models are vulnerable to adversarial attacks: One can specifically design inputs that cause the model to make a mistake. Our study focuses on adversarials in the security-critical domain ...
- **Measuring Fairness with Biased Data: A Case Study on the Effects of Unsupervised Data in Fairness Evaluation** (Chapter and Conference Paper)
  Evaluating fairness in language models has become an important topic, including different types of measurements for specific models, but also fundamental questions such as the impact of pre-training biases in ...
- **Extending Drift Detection Methods to Identify When Exactly the Change Happened** (Chapter and Conference Paper)
  Data changing, or drifting, over time is a major problem when using classical machine learning on data streams. One approach to deal with this is to detect changes and react accordingly, for example by retrain...
- **On the Change of Decision Boundary and Loss in Learning with Concept Drift** (Chapter and Conference Paper)
  Concept drift, i.e., the change of the data generating distribution, can render machine learning models inaccurate. Many technologies for learning with drift rely on the interleaved test-train error (ITTE) as ...
- **Fairness-Enhancing Ensemble Classification in Water Distribution Networks** (Chapter and Conference Paper)
  As relevant examples such as future criminal detection software [1] show, fairness of AI-based decision support tools affecting the social domain constitutes an important area of research. In this contributio...
- **iSAGE: An Incremental Version of SAGE for Online Explanation on Data Streams** (Chapter and Conference Paper)
  Existing methods for explainable artificial intelligence (XAI), including popular feature importance measures such as SAGE, are mostly restricted to the batch learning scenario. However, machine learning is of...
- **For Better or Worse: The Impact of Counterfactual Explanations’ Directionality on User Behavior in xAI** (Chapter and Conference Paper)
  Counterfactual explanations (CFEs) are a popular approach in explainable artificial intelligence (xAI), highlighting changes to input data necessary for altering a model’s output. A CFE can either describe a s...
- **Spatial Graph Convolution Neural Networks for Water Distribution Systems** (Chapter and Conference Paper)
  We investigate the task of missing value estimation in graphs as given by water distribution systems (WDS) based on sparse signals as a representative machine learning challenge in the domain of critical infra...
- **One-Class Intrusion Detection with Dynamic Graphs** (Chapter and Conference Paper)
  With the growing digitalization all over the globe, the relevance of network security becomes increasingly important. Machine learning-based intrusion detection constitutes a promising approach for improving s...
- **iPDP: On Partial Dependence Plots in Dynamic Modeling Scenarios** (Chapter and Conference Paper)
  Post-hoc explanation techniques such as the well-established partial dependence plot (PDP), which investigates feature dependencies, are used in explainable artificial intelligence (XAI) to understand black-bo...
- **Agnostic Explanation of Model Change based on Feature Importance** (Article, Open Access)
  Explainable Artificial Intelligence (XAI) has mainly focused on static learning tasks so far. In this paper, we consider XAI in the context of online learning in dynamic environments, such as learning from rea...
- **Reject Options for Incremental Regression Scenarios** (Chapter and Conference Paper)
  Machine learning with a reject option is the empowerment of an algorithm to abstain from prediction when the outcome is likely to be inaccurate. Although the topic has been investigated in the literature alrea...
- **Intelligent Learning Rate Distribution to Reduce Catastrophic Forgetting in Transformers** (Chapter and Conference Paper)
  Pretraining language models on large text corpora is a common practice in natural language processing. Fine-tuning of these models is then performed to achieve the best results on a variety of tasks. In this p...
- **Explainable Artificial Intelligence for Improved Modeling of Processes** (Chapter and Conference Paper)
  In modern business processes, the amount of data collected has increased substantially in recent years. Because this data can potentially yield valuable insights, automated knowledge extraction based on proces...