1 Introduction

Human behavior includes physical, mental, and social actions that emit a wide range of biosignals which can be captured by a variety of sensors. The processing and interpretation of such biosignals provides an inside perspective on human physical and mental activities, complementing traditional approaches of observing human behavior or collecting explicit input in the form of subjective data (e.g. through surveys). Biosignals represent a promising approach to better understand user needs, apply this information for rapid system adaptation, and provide users with instant and transparent feedback on the system’s understanding of their needs. As great strides have been made in integrating sensor technologies into ubiquitous devices and in machine learning methods for processing and learning from data, we argue that the time has come to harness the full spectrum of biosignals to understand user needs and to adapt systems accordingly.

Adaptive systems are studied in many research communities. Many adaptive systems rely on data collected by observing human behavior or on explicit human input via human–machine interfaces. For example, navigation systems are based on GPS sensor data enabling location synchronization, and recommender systems leverage information about the buying behavior and common interests of their users. These systems have advanced significantly, but they all depend on the availability of their users’ explicit input or observable behavior. Complementary to this, biosignals offer the potential to adapt systems to signals which, in the context of a concrete application, can be interpreted as implicit indicators of user needs. Thus, system adaptation to user needs can be performed even if the user provides no explicit input and transmits no observable behavioral data.

This position paper introduces Biosignal-Adaptive Systems (BAS), which continuously record biosignals of users to learn their current needs and adapt system components, models, and system output to them with the goal of improving performance during the interaction with the user. The paper first describes a taxonomy of biosignals suitable for identifying user needs, introduces a key conceptualization of BAS, and then illustrates examples of BAS, ranging from biosignal-adaptive interaction systems that improve assistance by identifying mental states such as attention and engagement to biosignal-adaptive enterprise systems that target increasing human worker productivity and well-being. Finally, we articulate future research challenges for the successful deployment of BAS to business and society.

2 Key concepts

2.1 Biosignals

Biosignals are autonomous signals produced by the living organism, energetically measurable in physical quantities using sensors [1]. Biosignals are based on chemical and physical actions of the human body and serve to control, regulate, and transfer information in the human organism, thereby enabling orderly interaction in the overall human system. Depending on their origin, biosignals are measured in different quantities, i.e. in the form of electrical quantities (potential, current, resistance), mechanical quantities (force, pressure, movement), acoustic quantities (speech, non-verbal articulations, and body noises), thermal quantities (temperature, amount of heat), and chemical quantities (concentration, pH). Figure 1 depicts our biosignals taxonomy. It indicates that a biosignal is the result of a human activity captured by a particular sensor. Human behavior may encompass several activities; attention, for example, may manifest in brain activity, eye gaze, and facial expression. In such a case, several sensors are applied to simultaneously capture multiple human activities. We refer to the result as multimodal biosignals.

Fig. 1

Taxonomy of Biosignals resulting from Human Activities captured by Sensors (see [1])

Today, the daily life of many people is interwoven with the use of digital devices. Modern devices are already well equipped with a large variety of sensors, many of which are always on and always connected to the internet. Modern smartphones, for example, hold (1) multiple microphones to capture acoustic biosignals in the form of speech and non-verbal articulation (laughing, breathing, snoring) from the user and bystanders; (2) multiple cameras to capture optical biosignals, such as the user’s face, facial expressions, and eye gaze; (3) inertial sensors to measure kinematic biosignals, such as acceleration and angular velocity of motion in 3D; (4) infrared sensors to measure body heat and to control other devices; and (5) laser sensors to measure distance, just to name a few. Furthermore, body-attached electrodes are used to measure electrical biosignals, some of which are known from medical examinations. Common examples of electrical biosignals are heart activity measured by electrocardiography (ECG), muscle activity recorded by electromyography (EMG), brain activity captured by electroencephalography (EEG), eye activity measured by electrooculography (EOG), and skin conductance measured by electrodermal activity (EDA). As a result of the widely integrated and available sensors, most BAS focus on acoustic, kinematic, optical, and electrical biosignals, while thermal and chemical biosignals are yet to be fully explored. Body temperature has been successfully used at airports since the 2003 SARS outbreak for fever screening and detection systems based on infrared cameras (optical biosignal) [2], but the differentiated adaptation of systems to thermal biosignals is less promising. Major reasons are that body temperature is subject to fluctuations due to changing environmental conditions and physical activity, and that the body temperature range offers little differentiation potential for BAS.

In addition to already deployed sensors that accompany and surround us in everyday life, a new generation of sensors is emerging that can be woven into clothing, printed on the skin, injected under the skin, or implanted in the body.

Chemical biosignals are typically measured by taking blood samples, which prohibits their continuous application in adaptive systems. However, non-invasive sensors have recently been proposed, such as glucose sensors that determine the blood sugar level based on the glucose concentration in sweat at the surface of the skin [3]. Wearable sensor systems have become available that allow for non-invasive 24/7 measurement, which is extremely useful for the growing number of diabetes mellitus patients. Once such a sensor is connected to an insulin pump, the combination becomes a BAS based on chemical biosignals. Applications reach far beyond medical purposes; for example, lifestyle apps are possible that offer their users guidance in food choice decisions depending on their blood sugar level. Other chemical quantities like pH are also conceivable as soon as non-invasive sensors become available.
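To make the adaptation idea concrete, such a lifestyle app could start from a simple rule-based mapping of the sensed chemical biosignal to guidance. The following sketch is purely illustrative: the function name is ours, and the 70/180 mg/dL cut-offs are the commonly cited hypo-/hyperglycemia bounds, not part of any deployed system.

```python
def food_guidance(glucose_mg_dl: float) -> str:
    """Map a (hypothetical) sweat-derived glucose estimate to dietary advice.
    The 70/180 mg/dL thresholds are the commonly cited hypo-/hyperglycemia
    bounds and serve purely as illustration."""
    if glucose_mg_dl < 70:
        return "low glucose: suggest a carbohydrate-rich snack"
    if glucose_mg_dl > 180:
        return "high glucose: suggest postponing sugary foods"
    return "glucose in target range: no dietary adaptation"

print(food_guidance(65))
```

A real BAS would of course replace such fixed rules with personalized, learned models, but the basic sense-interpret-adapt structure stays the same.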

This new level of sensor integration holds both enormous potential for BAS but also challenges and risks with regard to privacy, data protection, transparency, and user empowerment. These issues will be discussed in the final section of this paper.

2.2 Adaptive systems

In biology, adaptation describes the process of change by which an organism or species becomes better suited to its environment. The ability to adapt is crucial for the survival of living beings. The concept of adaptation has also been transferred to and successfully applied in information technology (IT). The design of adaptive systems is investigated in many different research communities in computer science, e.g. for hypertext/hypermedia systems [4], for speech communication systems that adapt to speakers [5], domains [6], and languages [7], in human-computer interaction (HCI) [8,9,10], robotics [11], ubiquitous computing [12], software engineering [13, 14], systems for clinical support [15], for nursing care [16], and for everyday assistance at home [17], to name a few. In particular, the field of artificial intelligence (AI) has coined the notion of intelligent agents that perceive their environment through sensors and act upon that environment through actuators [18]. Intelligent agents are characterized by the ability to learn and act autonomously. Thus, we argue that an intelligent agent represents one form of an adaptive system. In the past, intelligent agents were mainly based on predefined prior knowledge (e.g. in the form of lookup tables or condition-action rules). With the availability of growing amounts of contextual data and recent advances in machine learning, it has become possible to design intelligent agents that gain experience over time and extend prior knowledge by collecting and processing context information perceived through sensors. Today, adaptive systems leverage context to automatically perform system-driven adaptations with varying degrees of intelligence, ranging from the simple application of predefined knowledge (e.g. in the form of rules) to the ability to learn continuously in order to expand the predefined knowledge.
Context in that sense is any information that can be used to characterize the situation of an entity, where an entity can be a person, place, or object that is considered relevant to the interaction between a user and an adaptive system [19]. As mentioned above, biosignals capture user context by providing an inside perspective on human physical, mental, and social activities.

2.3 Building blocks of biosignal-adaptive systems

Biosignal-adaptive systems (BAS) are able to react to changing user needs in varying tasks and environments. Conceptually, BAS build on control theory [20]. User needs and changes thereof are predicted by continuously measuring, processing, and interpreting the biosignals emitted by the user. The predicted result is provided to the technical system, which is equipped with the ability to adapt its behavior to the user needs, for example through audible or visual output to the graphical user interface, through changes in reaction time, or through changes in solution strategies. Similar models have been proposed in the field of self-adaptive software, e.g. the MAPE-K model [14], the observer and controller model [21], or the sense-plan-act model [13].

In contrast to these models, our proposed BAS conceptualization combines several features in an innovative way that discriminates it from existing solutions. Firstly, a BAS closes the human–machine interaction loop by giving the user continuous and timely feedback about its interpretation of the user needs, based on instant processing of multimodal biosignals. According to our concept of transparent BAS, the user receives this feedback in the form of the interpreted needs, the system adjustments made, and the resulting biosignal response. Secondly, BAS focus on the human as the system under control and observation, where biosignals are transformed into a control input for real-time adaptation following a continuous loop approach. This way, the user who implicitly produces these biosignals can change the behavior and outcome of the technical system without explicit command and control. Thirdly, since BAS perform in real time without perceivable latency between the biosignal input and the system’s feedback, the user and the technical system form something like an oscillating circuit: not only does the system adapt to the user, but the user also tunes to the system by moderating their biosignals. This opens avenues for radically new methods and applications of co-adapting interfaces. First examples come from the field of biosignal-based spoken communication [22], in which we use speech-related biosignals beyond acoustics, stemming from articulatory muscle activity, neural pathways, and the brain itself, and convert them directly into audible speech that is played back to users with low latency, such that they can listen to themselves thinking [23].

Building on existing work in the fields of physiological computing with the concept of the biocybernetic loop [24] and human-computer interaction [9, 10] we argue that such a BAS should consist of four interconnected building blocks as depicted in Fig. 2.

Fig. 2

Building blocks of a Biosignal-Adaptive System (BAS), clockwise: the human user emits biosignals that are captured by a biosignal recording device equipped with sensors, then processed and classified into user states or traits, which feed the adaptation that tunes the system behavior to the user needs. The human user reacts to the adapted system by emitting new biosignals, thereby closing the human-system interaction loop

The BAS process is initiated (see Fig. 2, clockwise, at the block labeled Human) when a user engages in physical, mental, or social activities with or through a technical system. Consequently, s/he emits multi-dimensional biosignals, which are measured by a sensor-equipped Biosignal Recording device. If multimodal biosignals are captured, the signals have to be time-synchronized between devices prior to signal transmission. The subsequent Processing and Classification component processes the received signal by performing artifact removal and normalization and by extracting features relevant for the classification or recognition task. If multimodal biosignals were measured, fusion strategies are applied to combine their complementary information. We refer to classification if a single sample is classified into one out of n classes, while recognition finds the most likely class label sequence for time series signals. Here, the system interprets, for example, user traits (e.g. identity, age, personality) and user states (e.g. emotion, engagement, attention, workload). The result is transmitted continuously to the Adaptation component, which decides whether the technical system should adapt at the given moment in time. If so, the adaptation process is performed according to the implemented adaptation strategy. Such a strategy could consist of a simple set of if-then-else rules or of more complex behavior models. Adaptation can be performed once, after batches of signals, or continuously to adapt dynamically to changes. Furthermore, adaptation might apply supervised learning strategies, which require grounding through supervision or interaction with users. Alternatively, an unsupervised learning mode is applied, for which the predicted class labels are treated as ground truth. The resulting adapted technical system provides an output via the graphical user interface to the user, who reacts by generating new biosignals, thereby closing the human-system interaction loop.
Since the biosignal generation is an implicit process, the user influences the system behavior without having to perform explicit directed actions or system inputs.
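The building blocks above can be sketched in a few lines of code. Everything in this sketch is a deliberately simplified illustration under our own assumptions: z-scoring stands in for preprocessing, a spike-ratio heuristic stands in for a trained classifier, and an if-then rule stands in for the adaptation strategy.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class UserStateEstimate:
    label: str         # e.g. "attentive" or "distracted"
    confidence: float  # in [0, 1]

def preprocess(samples: Sequence[float]) -> list:
    """Normalization placeholder for the Processing component: simple z-scoring."""
    mean = sum(samples) / len(samples)
    std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5 or 1.0
    return [(s - mean) / std for s in samples]

def classify(features: Sequence[float]) -> UserStateEstimate:
    """Toy one-out-of-n classification: the share of extreme samples serves
    as a purely illustrative proxy for distraction."""
    spike_ratio = sum(1 for f in features if abs(f) > 1.5) / len(features)
    if spike_ratio > 0.5:
        return UserStateEstimate("distracted", spike_ratio)
    return UserStateEstimate("attentive", 1.0 - spike_ratio)

def adapt(state: UserStateEstimate, notify: Callable[[str], None]) -> None:
    """Rule-based Adaptation component: defer output while the user is distracted."""
    if state.label == "distracted" and state.confidence > 0.7:
        notify("deferring non-critical information")
    else:
        notify("presenting next assistance item")

# One pass through the loop: sense -> process/classify -> adapt -> feedback.
state = classify(preprocess([0.1, 0.4, -0.2, 0.9, 0.3]))
adapt(state, print)
```

In a deployed BAS each of these placeholders would be a substantial component (synchronized multimodal recording, learned models, a rich adaptation strategy), but the control-loop structure is exactly the one of Fig. 2.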

3 Examples of biosignal-adaptive systems

We present several examples of BAS to illustrate which biosignals are useful for adapting systems to user needs. The examples showcase which biosignals are captured, processed, and interpreted, and how the system components and thus the system behavior are adapted and feedback is given to the user in real time. The first class of systems concerns Biosignal-Adaptive Interaction Systems for Assistance and Activation. One application describes SmartHelm, an attention-sensitive smart helmet for driver assistance. The other application features I-CARE, an activation system for people with dementia that selects activation content based on the engagement of its users. The second class of systems demonstrates the potential in the context of designing adaptive workplaces under consideration of the productivity and well-being of human workers. Biosignal-adaptive enterprise systems process biosignals of human workers and adapt the workplace accordingly. Two applications of such systems, focusing on the user states of attention and flow, demonstrate the potential of processing biosignals and provide corresponding workplace adaptations in real time.

3.1 Biosignal-adaptive interaction systems for assistance and activation

Two applications, one for user assistance (SmartHelm) and one for user activation (I-CARE), are presented which measure multiple modalities (e.g. speech, facial expressions, eye gaze, brain activity) and fuse the resulting biosignals to reliably discriminate user states such as attention, distraction, workload, stress, and engagement. The prototypical systems feature low-latency signal processing and fast machine learning methods to provide immediate feedback and an adapted system response to their users. The applications were evaluated and validated in field studies.

Fig. 3

Through the eyes of SmartHelm (top left and bottom panel), automatically annotated objects (colored bounding boxes), time-synchronized brain activity and gaze tracking of the biker (bottom right), map with GPS position and biker’s attention profile (bottom left). Colors in the map indicate the current position (blue) and level of distraction while driving, i.e. no (green), medium (yellow), and high (red) level, see [25] for more details; © 2022 SmartHelm

3.1.1 Attention-aware driver assistance: SmartHelm

SmartHelm is an attention-sensitive smart helmet that integrates non-invasive brain and eye activity detection with hands-free augmented reality components in a speech-enabled outdoor assistance system [26]. It is designed for cargo bikers, who close the last mile in city logistics by delivering goods from a transportation hub to the final destination. Since cargo bikers typically navigate busy city roads to deliver goods under time pressure, their job requires full attention and constant adaptation to a wide variety of situations and distractions. They can therefore use any technical support that keeps them on track, reduces their stress level, and increases their safety on the road.

SmartHelm continuously tracks the activity of the eyes and brain of the bikers to interpret attention and distraction in the driving context. The interpreted user states are then applied to adapt eye- and hands-free assistance services such as navigation, task planning, and communication to the biker’s needs; e.g. relevant task information is presented in a context-sensitive and least disruptive manner [26]. Figure 3 shows the helmet prototype (top right) and look-through (top left). The bottom panel displays information derived for the expert during development, i.e. center: annotations of objects in the path; bottom right: the EEG and eye-tracking biosignals of the biker, both used to train the AI system; bottom left: a heat map with the biker’s GPS trace along with the automatically identified attention level. The critical task of SmartHelm is to find the sweet spot of providing useful and timely assistance without overloading the cyclist.
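As a rough illustration of how two such modalities could be fused into a single attention estimate, consider the following sketch. The chosen features (EEG alpha power, gaze dispersion), the reference scales, and the equal weighting are hypothetical stand-ins of ours, not SmartHelm's trained models.

```python
import statistics

def gaze_dispersion(xs, ys):
    """Spatial spread of gaze points (pixels); wide scattering of the gaze
    can indicate scanning or distraction."""
    return statistics.pstdev(xs) + statistics.pstdev(ys)

def attention_level(eeg_alpha_power, xs, ys, alpha_ref=10.0, disp_ref=50.0):
    """Fuse one EEG feature and one gaze feature into an attention score in [0, 1].
    The equal weighting and the reference scales alpha_ref/disp_ref are made-up
    illustration values, not parameters of the real system."""
    eeg_term = max(0.0, 1.0 - eeg_alpha_power / alpha_ref)   # high alpha ~ disengagement
    gaze_term = max(0.0, 1.0 - gaze_dispersion(xs, ys) / disp_ref)
    return 0.5 * eeg_term + 0.5 * gaze_term
```

A score like this could then drive the adaptation, e.g. deferring navigation prompts whenever the estimated attention drops below a threshold.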

3.1.2 Engagement-aware activation systems: I-CARE

Fig. 4

I-CARE: Ad-hoc activation group (left) and individual tandem session (right) (Source: © AWO Karlsruhe gGmbH, see also [27])

I-CARE is a hand-held activation system that allows professional and informal caregivers to cognitively and socially activate people with dementia in joint activation sessions without special training or expertise. It is suitable for activation in ad-hoc group sessions (see left-hand side of Fig. 4) and in individual tandem sessions (see right-hand side of Fig. 4). I-CARE consists of an easy-to-use tablet application that presents activation content and a server-based backend system that securely manages the contents and events of activation sessions.

After requesting permission, I-CARE uses the microphone and camera of the tablet to record acoustic and optical biosignals. It also stores keyboard interactions and integrates an E4 wristband to measure electrical biosignals, such as ECG and EDA. I-CARE uses these multimodal biosignals of explicit and implicit user interaction to estimate which content is successful in activating individual users. Over the course of use, I-CARE’s recommendation system learns about the individual needs and resources of its users and automatically personalizes the activation content. In particular, it identifies the engagement of individual users in order to present them with content that they find interesting, thereby keeping users on track to increase activation time and intensity, which correlates with activation outcome. In addition, information about past sessions can be retrieved such that the activation items seamlessly build on previous sessions, while eligible stakeholders are informed about the current state of care and the daily form of their protégés [27].
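The engagement-driven personalization described above can be illustrated with a simple bandit-style learner. This epsilon-greedy sketch is our own simplified stand-in, not I-CARE's actual recommendation system; the engagement scores in [0, 1] would come from the multimodal biosignal pipeline.

```python
import random

class EngagementBandit:
    """Epsilon-greedy selection of activation content driven by observed
    engagement scores in [0, 1]. A simplified illustration, not the
    deployed recommender."""

    def __init__(self, items, epsilon=0.1, seed=None):
        self.items = list(items)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {i: 0 for i in self.items}
        self.means = {i: 0.0 for i in self.items}

    def select(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.items)               # explore new content
        return max(self.items, key=lambda i: self.means[i])  # exploit known favorites

    def update(self, item, engagement):
        """Incorporate the engagement observed for a presented item (running mean)."""
        self.counts[item] += 1
        self.means[item] += (engagement - self.means[item]) / self.counts[item]
```

Over many sessions, content that reliably engages a particular user accumulates a higher mean and is presented more often, while occasional exploration keeps new content in rotation.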

3.2 Biosignal-adaptive enterprise systems

Biosignal-adaptive enterprise systems define a class of information systems in organizations that provide adaptive digital workplaces for human workers by monitoring, analyzing, and responding to biosignals in real time. In the following, we present two biosignal-adaptive enterprise systems for the digital workplace with a specific focus on the two psychological user states of (i) attention and (ii) flow. Specifically, in (i) we capture eye gaze using eye-tracking technology to recognize the visual attention of human workers when working with information dashboards for decision-making. In (ii) we monitor heart activity with surface electrodes to capture electrical biosignals (specifically ECG signals) and on this basis recognize flow states. Building on the discovered flow states, we provide flow-adaptive notifications at the digital workplace for human workers.

Fig. 5

Attentive Information Dashboards: On the basis of an eye-tracking device, we collect and analyze visual attention during information processing and provide feedback

3.2.1 Attentive information dashboards

Information dashboards are a critical capability in contemporary business intelligence and analytics systems supporting decision-making in organizations. Despite their strong potential to support better decision-making, the massive amount of information they provide challenges users performing data exploration tasks.

Accordingly, information dashboard users face difficulties in managing their limited attentional resources when processing the information presented on dashboards. Attentive information dashboards leverage eye-tracking in real time and provide individualized visual attention feedback (VAF) to human workers. Specifically, we measure the fixation duration and the number of fixations on predefined areas of interest of the dashboard. The underlying idea is that providing quantified information about human workers’ visual attention will improve their attentional resource allocation as well as their resource management.
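The two fixation metrics can be computed from detected fixations in a straightforward way. The following sketch assumes fixation detection has already been done (e.g. by the eye tracker's own filter) and uses hypothetical rectangular areas of interest; the function and field names are ours.

```python
def aoi_metrics(fixations, aois):
    """Aggregate fixation count and total fixation duration per area of interest.
    fixations: iterable of (x, y, duration_ms); aois: name -> (x0, y0, x1, y1)."""
    stats = {name: {"count": 0, "duration_ms": 0} for name in aois}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                stats[name]["count"] += 1
                stats[name]["duration_ms"] += dur
                break  # assign each fixation to at most one AOI
    return stats
```

Aggregated per AOI, these counts and durations are exactly the kind of quantified attention summary that can be rendered back to the user as a feedback overlay.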

Figure 5 depicts the basic idea of attentive information dashboards at the workplace. Specifically, we use the Tobii EyeTracker 4C to collect eye gaze and present attention feedback as an overlay to the information dashboard. Our research has demonstrated the positive effects of attentive information dashboards on users’ attentional resource allocation and resource management [28]. Attention-aware adaptive systems at the workplace are not only relevant for processing huge amounts of information on information dashboards. In a related research project, we have shown their potential for improving attention management in virtual team meetings [29].

3.2.2 Flow-adaptive notification management systems

Flow refers to the holistic sensation that people feel when they act with total involvement. Promoting flow in the context of work is desirable, because it leads to increased workers’ well-being and performance [30].

However, with the increasing number of interruptions at the workplace, it is becoming more difficult to achieve the desirable flow state. Therefore, as part of the research project Kern, funded by the German Federal Ministry of Labour and Social Affairs (BMAS), we first aimed to detect flow states automatically in real time using biosignals and supervised machine learning. Subsequently, we designed different forms of adaptation that intelligently protect human workers from notifications while they are in flow states.

Figure 6 depicts the workplace as well as two participants of the field study carried out as part of the Kern project. We designed and deployed a flow-adaptive notification management system using the Polar H10 device for collecting ECG signals, as depicted at the bottom right of Fig. 6. The device was connected via Bluetooth to the corresponding computer. We provided a notification management plugin for the operating system that allowed participants to manually activate/deactivate the connection. Specifically, we leverage cardiac features extracted from the ECG signals to train a flow state classifier [31]. In a first step, we train this classifier using labeled data collected through an experience sampling method (ESM) procedure. In a second step, the flow-adaptive notification system leverages this classifier; specifically, we implement it as a plugin for the collaboration tool Slack. More detailed information about the study and the evaluation results is provided in [32].
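Typical cardiac features for such a classifier can be derived directly from the successive RR intervals of the ECG. The sketch below computes two standard ones, mean heart rate and RMSSD (a common short-term heart-rate-variability measure); it illustrates the feature extraction step only, whereas the actual flow classifier is trained on ESM-labeled data as described above.

```python
def hrv_features(rr_ms):
    """Two standard cardiac features from successive RR intervals (milliseconds):
    mean heart rate (bpm) and RMSSD, the root mean square of successive
    differences between adjacent RR intervals."""
    mean_rr = sum(rr_ms) / len(rr_ms)
    hr_bpm = 60000.0 / mean_rr
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
    return hr_bpm, rmssd
```

Features like these, computed over a sliding window, would then be fed to the trained classifier, whose flow/no-flow prediction gates the delivery of Slack notifications.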

Fig. 6

Flow-Adaptive Notification Management: We developed and deployed this system as part of a field study at Workwise GmbH in Karlsruhe. Over a period of two weeks, the participants shared biosignals recorded by the Polar H10 device. This data was processed in real time to classify flow states and adapt the notification management of the collaboration tool Slack accordingly

4 Future research challenges

In this position paper we have introduced the concept of BAS and its major building blocks. Furthermore, we have described several BAS applications from our own research. In the following, we describe the lessons learned from implementing BAS in terms of recording and annotating data, from using it to model human activities for the purpose of both analysis and synthesis, and from adapting and evaluating systems, taking into account the work of others. Subsequently, we articulate two major challenges for the successful delivery of BAS to business and society.

4.1 Implementation challenges

The implementation of BAS comes with numerous challenges. In the following we focus on four major areas that we were also confronted with: (1) data collection and annotation, (2) models of BAS-relevant human activities, (3) BAS design space for adaptation strategies, and (4) BAS evaluation.

First, even as sensors quickly improve, collecting high-quality sensor data in the field remains a major challenge. Typical challenges are (i) ethical considerations, to which we have dedicated a separate subsection (see 4.3 below), (ii) artifacts ranging from technical (e.g. sensors, connectors, network communication) and environmental (e.g. signal interference, ambient noise) to biological (e.g. sweat, eye-blinks) factors, which result in noisy data [33], (iii) the need for sensor calibration (e.g. eye trackers), and (iv) the need for baseline data (e.g. resting-state data in ECG, and data for normalization). Furthermore, annotations of recorded data are an integral part of machine learning and AI applications. Data annotation is one of the most time-consuming and labor-intensive parts; it requires talented and motivated annotators and clear annotation guidelines, including semantic methodologies and ontologies, and, particularly for synchronous recordings of multimodal biosignals, it relies on suitable and reliable tools [34]. If self-reported data, for example about cognitive user states, need to be collected, e.g. by using the experience sampling method [35], the corresponding BAS studies become very lengthy and exhausting for the participants.

Second, the development of BAS-relevant human activity models encompasses a range of intricate tasks, such as the extraction and selection of good features [36], the choice of appropriate machine learning [37] or deep learning [38] approaches together with the definition of suitable error functions, strategies for parameter optimization, and proper evaluation metrics, just to name a few. The development of models also includes considerations of (i) robustness and generalizability with respect to data variability within and across users, tasks, and contexts [39], (ii) transferability and scalability, i.e. whether a model can handle unseen user states and traits even when only few or zero data samples are available [40] and can cope with any amount of data in a cost-effective way, and (iii) accountability and bias-awareness [41].
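Cross-subject generalizability, point (i) above, is commonly estimated with leave-one-subject-out cross-validation: a model trained on all other users is evaluated on the held-out user. A generic sketch (the function names are ours; any training and evaluation logic can be plugged in):

```python
def leave_one_subject_out(data_by_subject, train_fn, eval_fn):
    """Estimate cross-subject generalization: train on all subjects but one,
    evaluate on the held-out subject, and repeat for every subject."""
    scores = {}
    for held_out in data_by_subject:
        train = [x for subj, xs in data_by_subject.items()
                 if subj != held_out for x in xs]
        model = train_fn(train)
        scores[held_out] = eval_fn(model, data_by_subject[held_out])
    return scores
```

Per-subject scores from such a protocol reveal exactly the within- and across-user variability issues discussed above: a model that looks strong on pooled data may still fail for individual held-out users.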

Third, we were challenged to make informed decisions regarding the BAS design space for adaptation strategies. Existing literature has shown that the design space for adaptive systems is huge [42]. Possible adaptations range from the modification of content and interaction to task scheduling and allocation. Furthermore, different types of triggers, e.g. spatio-temporal, environment, task, human, or system states, should be considered. It is impossible to systematically “test” all possible design configurations.

Fourth, the evaluation of adaptive systems in general is known to be a non-trivial task. Therefore, existing literature has proposed a modular approach [43] that includes technical performance as well as empirical evaluation building blocks. The specific characteristics of BAS as a specific class of adaptive systems make their evaluation even more challenging. For example, to implement the flow-adaptive notification management system, we first had to collect data, build a flow classifier, and evaluate its technical performance. The flow-adaptive notification system then made use of the classifier as a technical building block. From an evaluation point of view, we evaluated the entire system with real users in the field. It is challenging to clearly separate the dependencies between the perceived quality of the flow classifier and the evaluation of the overall system from the user’s point of view. Furthermore, the effects of a BAS on its users heavily depend on the individuals and their context. For example, some users already proactively managed their notification settings with regard to en-/disablement; they therefore did not benefit from the BAS.

4.2 Advancing AI methods for BAS

AI methods, specifically supervised machine learning techniques, have greatly advanced. However, AI methods are not yet ready to fulfill all requirements for building BAS. Thus, in the future, AI methods and tools need to be advanced to continuously process and interpret biosignals, to iteratively train and update models, to dynamically adapt to changing tasks and environments, to learn which information to keep and which to forget, and to discover how to transfer acquired knowledge to unseen domains, unknown users, or new architectures and platforms. We believe that BAS, as a challenging area of interdisciplinary research at the intersection of AI/ML, sensors, and adaptive systems design, provide both the push and the pull to further develop the respective fields.

4.3 Ethical considerations of BAS

Major initiatives have been launched in recent years focusing on ethical considerations with regard to AI-based systems such as BAS. One example is the “Ethically Aligned Design” (EAD) initiative, in which several hundred professionals, including engineers, scientists, ethicists, sociologists, and economists from six continents, have formulated societal and policy guidelines for intelligent systems to remain human-centric, serving humanity’s values and ethical principles [44]. Such systems should prioritize and have as their goal the explicit honoring of our inalienable fundamental rights and dignity as well as the increase of human flourishing and environmental sustainability. This begins with conscious contemplation, where ethical considerations help us define how we wish to live. EAD defines eight general principles to be followed by AI system creators, namely human rights, well-being, data agency, effectiveness, transparency, accountability, awareness, and competence. It also provides clear guidelines, methods, and metrics on how to put these general principles into practice [45].

During the design, implementation, and use of the BAS described above, the authors and their teams adhere to these principles and guidelines. We strive to sensitize our students to the ethical considerations related to AI systems in general and BAS in particular by discussing them in teaching and training as well as by enforcing their reflection at an early stage of each research project. However, we believe that future research is required to better understand how to break down the generic principles to the specific context of BAS. Furthermore, a deeper understanding of design trade-offs considering ethical principles is required. For example, a BAS may positively impact the well-being of individuals (e.g. increase flow), but at the same time come with new challenges with regard to data security and privacy.

5 Conclusion

In this position paper, we presented our perspective on BAS, an increasingly important class of AI systems that are able to automatically adapt to user needs by continuously interpreting their biosignals. We described key concepts and building blocks of BAS and showcased selected BAS examples. In order to fully leverage the potential of BAS, future research is required. Specifically, we highlight advancing AI methods for BAS and contextualizing ethical principles for BAS, as well as achieving a deeper understanding of design trade-offs.