  1. No Access

    Chapter and Conference Paper

    A Remark on Concept Drift for Dependent Data

    Concept drift, i.e., the change of the data generating distribution, can render machine learning models inaccurate. Several works address the phenomenon of concept drift in the streaming context usually assumi...

    Fabian Hinder, Valerie Vaquet, Barbara Hammer in Advances in Intelligent Data Analysis XXII (2024)

  2. Article

    Open Access

    Incremental permutation feature importance (iPFI): towards online explanations on data streams

    Explainable artificial intelligence has mainly focused on static learning scenarios so far. We are interested in dynamic scenarios where data is sampled progressively, and learning is done in an incremental ra...

    Fabian Fumagalli, Maximilian Muschalik, Eyke Hüllermeier in Machine Learning (2023)

  3. Article

    Open Access

    Contrasting Explanations for Understanding and Regularizing Model Adaptations

    Many of today’s decision making systems deployed in the real world are not static; they change and adapt over time, a phenomenon known as model adaptation. Because of their wide-reaching in...

    André Artelt, Fabian Hinder, Valerie Vaquet, Robert Feldhans in Neural Processing Letters (2023)

  4. No Access

    Article

    Metric Learning with Self-Adjusting Memory for Explaining Feature Drift

    Lifelong and incremental learning constitute key algorithms when dealing with streaming data in possibly non-stationary environments. Because of their capability of adapting to varying model complexity, non-pa...

    Johannes Kummert, Alexander Schulz, Barbara Hammer in SN Computer Science (2023)

  5. Article

    Open Access

    Novel transfer learning schemes based on Siamese networks and synthetic data

    Transfer learning schemes based on deep networks which have been trained on huge image corpora offer state-of-the-art technologies in computer vision. Here, supervised and semi-supervised approaches constitute...

    Philip Kenneweg, Dominik Stallmann, Barbara Hammer in Neural Computing and Applications (2023)

  6. Article

    Open Access

    Modularity in Nervous Systems—a Key to Efficient Adaptivity for Deep Reinforcement Learning

    Modularity as observed in biological systems has proven valuable for guiding classical motor theories towards good answers about action selection and execution. New challenges arise when we turn to learning: T...

    Malte Schilling, Barbara Hammer, Frank W. Ohl, Helge J. Ritter in Cognitive Computation (2023)

  7. No Access

    Chapter and Conference Paper

    Adversarial Attacks on Leakage Detectors in Water Distribution Networks

    Many Machine Learning models are vulnerable to adversarial attacks: One can specifically design inputs that cause the model to make a mistake. Our study focuses on adversarials in the security-critical domain ...

    Paul Stahlhofen, André Artelt, Luca Hermes in Advances in Computational Intelligence (2023)

  8. No Access

    Chapter and Conference Paper

    Measuring Fairness with Biased Data: A Case Study on the Effects of Unsupervised Data in Fairness Evaluation

    Evaluating fairness in language models has become an important topic, including different types of measurements for specific models, but also fundamental questions such as the impact of pre-training biases in ...

    Sarah Schröder, Alexander Schulz, Ivan Tarakanov in Advances in Computational Intelligence (2023)

  9. No Access

    Chapter and Conference Paper

    Extending Drift Detection Methods to Identify When Exactly the Change Happened

    Data changing, or drifting, over time is a major problem when using classical machine learning on data streams. One approach to deal with this is to detect changes and react accordingly, for example by retrain...

    Markus Vieth, Alexander Schulz, Barbara Hammer in Advances in Computational Intelligence (2023)

  10. No Access

    Chapter and Conference Paper

    On the Change of Decision Boundary and Loss in Learning with Concept Drift

    Concept drift, i.e., the change of the data generating distribution, can render machine learning models inaccurate. Many technologies for learning with drift rely on the interleaved test-train error (ITTE) as ...

    Fabian Hinder, Valerie Vaquet in Advances in Intelligent Data Analysis XXI (2023)

  11. No Access

    Chapter and Conference Paper

    Fairness-Enhancing Ensemble Classification in Water Distribution Networks

    As relevant examples such as the future criminal detection software [1] show, fairness of AI-based decision support tools affecting the social domain constitutes an important area of research. In this contributio...

    Janine Strotherm, Barbara Hammer in Advances in Computational Intelligence (2023)

  12. No Access

    Chapter and Conference Paper

    iSAGE: An Incremental Version of SAGE for Online Explanation on Data Streams

    Existing methods for explainable artificial intelligence (XAI), including popular feature importance measures such as SAGE, are mostly restricted to the batch learning scenario. However, machine learning is of...

    Maximilian Muschalik, Fabian Fumagalli in Machine Learning and Knowledge Discovery i… (2023)

  13. Chapter and Conference Paper

    For Better or Worse: The Impact of Counterfactual Explanations’ Directionality on User Behavior in xAI

    Counterfactual explanations (CFEs) are a popular approach in explainable artificial intelligence (xAI), highlighting changes to input data necessary for altering a model’s output. A CFE can either describe a s...

    Ulrike Kuhl, André Artelt, Barbara Hammer in Explainable Artificial Intelligence (2023)

  14. No Access

    Chapter and Conference Paper

    Spatial Graph Convolution Neural Networks for Water Distribution Systems

    We investigate the task of missing value estimation in graphs as given by water distribution systems (WDS) based on sparse signals as a representative machine learning challenge in the domain of critical infra...

    Inaam Ashraf, Luca Hermes, André Artelt in Advances in Intelligent Data Analysis XXI (2023)

  15. No Access

    Chapter and Conference Paper

    One-Class Intrusion Detection with Dynamic Graphs

    With growing digitalization across the globe, network security is becoming increasingly important. Machine learning-based intrusion detection constitutes a promising approach for improving s...

    Aleksei Liuliakov, Alexander Schulz in Artificial Neural Networks and Machine Lea… (2023)

  16. No Access

    Chapter and Conference Paper

    iPDP: On Partial Dependence Plots in Dynamic Modeling Scenarios

    Post-hoc explanation techniques such as the well-established partial dependence plot (PDP), which investigates feature dependencies, are used in explainable artificial intelligence (XAI) to understand black-bo...

    Maximilian Muschalik, Fabian Fumagalli in Explainable Artificial Intelligence (2023)

  17. Article

    Open Access

    Agnostic Explanation of Model Change based on Feature Importance

    Explainable Artificial Intelligence (XAI) has mainly focused on static learning tasks so far. In this paper, we consider XAI in the context of online learning in dynamic environments, such as learning from rea...

    Maximilian Muschalik, Fabian Fumagalli, Barbara Hammer in KI - Künstliche Intelligenz (2022)

  18. No Access

    Chapter and Conference Paper

    Reject Options for Incremental Regression Scenarios

    Machine learning with a reject option empowers an algorithm to abstain from prediction when the outcome is likely to be inaccurate. Although the topic has been investigated in the literature alrea...

    Jonathan Jakob, Martina Hasenjäger in Artificial Neural Networks and Machine Lea… (2022)

  19. No Access

    Chapter and Conference Paper

    Intelligent Learning Rate Distribution to Reduce Catastrophic Forgetting in Transformers

    Pretraining language models on large text corpora is a common practice in natural language processing. Fine-tuning of these models is then performed to achieve the best results on a variety of tasks. In this p...

    Philip Kenneweg, Alexander Schulz in Intelligent Data Engineering and Automated… (2022)

  20. No Access

    Chapter and Conference Paper

    Explainable Artificial Intelligence for Improved Modeling of Processes

    In modern business processes, the amount of data collected has increased substantially in recent years. Because this data can potentially yield valuable insights, automated knowledge extraction based on proces...

    Riza Velioglu, Jan Philip Göpfert in Intelligent Data Engineering and Automated… (2022)
