1 Introduction

The aims of automated driving systems are to reduce traffic accidents and improve road safety, since research suggests that most accidents are attributable to human error (Dingus et al. 2006). These systems function as supportive automation that completes parts of the driving task, reducing the driver's workload and allowing the driver to be out of the loop to some degree. However, present partially automated systems still expect the driver to stay in the loop, monitoring the whole driving process and ready to take control of the driving task at any time (Lu et al. 2016). Thus, when an out-of-the-loop driver suddenly needs to take over control from an automated driving system, accidents are more likely to occur if the time required for the take-over exceeds the time available (Jamson et al. 2013; Merat et al. 2014; Zeeb, Buchner and Schrauf 2015). To understand how human factors influence driver take-over time, this paper proposes a framework that incorporates human factors issues into transitions of control between the driver and automated driving systems. Through observations drawn from the review, we explore the primary human factors influencing the timing and effectiveness of driver take-over.

2 Related Literature

2.1 Automated Driving Systems

According to the Society of Automotive Engineers (SAE), six levels of driving automation are identified, from “no automation (Level 0)” to “full automation (Level 5)” (SAE 2014). Since most stakeholders have adopted the SAE standard, the remainder of this article adopts SAE's six-level definition of automated driving systems (ADS). This classification primarily describes how the dynamic driving task is distributed between drivers and automation systems. At Level 0 (no automation), the driving task is performed entirely by the driver, and at Level 5 (full automation), it is performed entirely by the automation. At Level 1, the driver and the system cooperatively perform the dynamic driving task, with the system providing assistance in either steering or acceleration/deceleration. Some advanced driver assistance systems (ADAS) currently available on many production vehicles belong to Level 1; two common examples are adaptive cruise control (ACC) and lane-keeping assist (LKA). In Level 2 vehicles, the automated driving system relieves the driver of both longitudinal and lateral control tasks; an example of Level 2 automation is a system that combines lane-keeping and ACC operations. At Level 2, however, the driver still needs to monitor the surrounding environment, receive system feedback, and remain responsible for the overall operation of the vehicle. Notwithstanding the potential for and the reality of driver distraction and inattention, Level 2 assumes that the human driver will continue to actively monitor the driving environment. Level 3 automation relieves the driver of the continuous supervisory requirement to some degree and thus changes the driver's role in a significant manner: drivers become passive supervisors whose attention may be directed toward secondary tasks.
However, once the automation system initiates a request for the driver to intervene and take over the dynamic control tasks, the human driver still needs to respond appropriately (Smith and Svensson 2015). Figure 1 shows the driver-ADS-environment interaction for Level 2 and Level 3 vehicles while driving (Marinik et al. 2014). For a vehicle with Level 2 or 3 automation, the dynamic driving tasks and the monitoring of the surrounding environment under normal or abnormal situations are carried out cooperatively by the human driver and the automation system.

Fig. 1.

Automated Operator-Vehicle Interaction System (Source: Marinik, Bishop, Fitchett, Morgan, Trimble, and Blanco 2014. Modified by authors)

In these partially automated systems, the driver's task changes from actively operating the vehicle to passively supervising the system. The difficulty in the driver's interaction with partial automation (a Level 2 or 3 vehicle) is that it assumes drivers are always available, though it is more likely that drivers will shift their attention to non-driving-related tasks. Consequently, drivers are sometimes not available to resume control of the vehicle when requested. These safety concerns are critical. In this paper, we review the literature on driver take-over from the automated driving systems of Level 2 and Level 3 vehicles.

2.2 Human Factors Issues During Automated Driving

While many automakers and technology providers are intensively developing automated driving vehicles, much research has also examined human factors issues that influence the safety and effectiveness of human intervention in ADS (Cunningham and Regan 2015; Jones 2013; Martens and Beukel 2013; Saffarian et al. 2012). Some primary considerations are highlighted below.

  • Mental workload and distraction

Automation purports to reduce driver stress and workload (Vahidi and Eskandarian 2003); however, reductions below a certain level of mental workload might have a negative effect on driving performance. In routine driving, automated driving often reduces mental workload by taking over parts of the driving task (Ma and Kaber 2005), and such low workload leads to boredom. Matthews and Desmond (2002) found that underload impaired drivers' performance more than overload.

When driving-related workload is low, instead of monitoring and supervising the autonomous driving system, drivers may seek to engage in other activities such as entertainment (Carsten et al. 2012; Merat et al. 2012). The number and duration of off-road glances and the extent of secondary-task involvement therefore increase under some forms of automated driving (Jamson et al. 2011; Cho, Nam and Lee 2006). Another study found that participants reduced horizontal gaze dispersion and side-mirror checks (He et al. 2011). As this underload occurs, delayed reactions can follow (Merat and Jamson 2009; Young and Stanton 2001). Tests on secondary tasks show that performance on these tasks improves under automated driving, which demonstrates the additional attention allocated to them (Rudin-Brown, Parker and Malisia 2003). The results indicate that the more driving automation is involved, the more willing drivers are to rely on the automation so that they can perform non-driving-related tasks. These studies thus illustrate that drivers may be more vulnerable to distraction during periods of automated driving, which becomes a safety issue when suddenly regaining control of the vehicle is required (Merat et al. 2012).

  • Situation Awareness

Endsley (1995) defines situation awareness (SA) as “the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future”. The three levels of SA are thus perception, comprehension, and projection, with higher levels depending on the success of lower ones (Endsley 1995). The first level of SA is to perceive the status, attributes, and dynamics of relevant elements in the environment. The second level is to comprehend the situation, based on a synthesis of the disjointed Level 1 elements. The third and highest level, achieved through knowledge of the status and dynamics of the elements and comprehension of the situation (Levels 1 and 2), is to project the future actions of the elements in the environment (Endsley 1995; Endsley and Kaber 1999). SA thus provides a basis for decision making and performance.

A driver's SA reflects a dynamic mental model of the driving environment, including current driving conditions; the condition of other vehicles, pedestrians, and objects; traffic lights; and so on (Horrey et al. 2006; Endsley 2015). Matthews et al. (2001) outline the elements of SA relevant to driving: spatial awareness, identity awareness, temporal awareness, goal awareness, and system awareness. Spatial awareness is knowledge of the location of all relevant features of the environment. Identity awareness is knowledge of salient items in the driving environment. Temporal awareness is knowledge of how the spatial “picture” changes over time. Goal awareness covers the driver's navigation intentions toward the destination and the maintenance of speed and direction. System awareness is relevant information on the vehicle within the driving environment. Together, the operational, tactical, and strategic levels of driving comprise SA, incorporating navigational knowledge, environment and interaction knowledge, spatial orientation, and various vehicle statuses (Matthews, Bryant, Webb and Harbluk 2001; Ma 2005).

Fisher and Strayer (2014) consider driving to depend on several cognitive processes: scanning the driving environment visually, predicting and anticipating potential threats, identifying threats and objects in the driving environment, deciding on an action, and executing appropriate responses (SPIDER). When drivers engage in secondary tasks unrelated to driving, attention is diverted from driving and performance on these SPIDER-related processes is impaired (Regan and Strayer 2014). Consequently, activities that divert attention from the driving task degrade the driver's situation awareness.

For automated driving, researchers find that the effects on a driver's situation awareness are direct (Endsley 1995). The results indicate that drivers are willing to rely on automation so that they can perform secondary tasks. Because working memory plays a critical role in a driver's situation awareness and secondary tasks place demands on working memory, secondary tasks degrade the driver's SA (Johannsdottir and Herdman 2010; Heenan et al. 2014). When an emergency such as an unexpected conflict or an automation malfunction occurs, the situation requires quick reactions that depend largely on the SA level. However, secondary non-driving-related tasks decrease the SA level, and driving performance consequently suffers (Matthews, Bryant, Webb and Harbluk 2001; Merat, Jamson, Lai and Carsten 2010). In addition, Endsley (1996) attributes drivers' lower SA under automated conditions to their more passive role in decision making, in which they rely on the automated expert system's recommendations.

Several empirical studies clearly demonstrate that SA is reduced with the aid of automation. For example, drivers using ACC (adaptive cruise control) had much longer braking reaction times than those manually controlling the vehicle, even when the braking event was expected (Young and Stanton 2007; Merat and Jamson 2009; Rudin-Brown, Parker and Malisia 2003). Deceleration rates with ACC were twice those with conventional cruise control (CCC), and ACC was significantly less safe than manual driving (Fancher et al. 1998; Rudin-Brown, Parker and Malisia 2003). Moreover, when regaining driving control from the automated system was needed, drivers also demonstrated worse performance (Merat et al. 2010).

  • Trust

The degree of trust a human places in an automation system is one of the most critical factors influencing the operator's use of complex automated systems (Jones 2013).

Lee and Moray (1992) consider that human–automation trust depends on the performance, process, or purpose of an automated system. Performance-based trust depends on how well the automated system completes a task. Process-based trust varies with the operator's understanding of the methods the automated system uses to perform tasks, while purpose-based trust depends on the designer's intended use of the automated system. When first experiencing automated systems, people commonly prefer to believe that they are perfect, because they provide expertise the user may lack (Lee and Moray 1992; Kantowitz, Hanowski and Kantowitz 1997); from this perspective, the initial basis of trust is faith. Once the system encounters errors, that trust rapidly dissolves. As relationships with automated systems progress, dependability and predictability replace faith as the primary basis of trust (Madhavan and Wiegmann 2007). Ma (2005) argues that a lower degree of trust may place a higher mental demand on human operators, who then have to monitor both the system states and the automation states. This may in turn affect operators' SA: the added demand on attention reduces their perception, comprehension, and projection of system states and environmental knowledge.

Marsh and Dibben (2003) identify trust at three different layers: dispositional trust, situational trust, and learned trust. Dispositional trust refers to an individual's enduring tendency to trust automation. Situational trust depends on the specific context of an interaction, but variations in an operator's mental state can also change it. Learned trust, the third layer, is based on past experiences with a specific automated system. Hoff and Bashir (2015) summarize the factors influencing trust and reliance according to these three layers. For dispositional trust, they identify four primary sources of variability: culture, age, gender, and personality. For situational trust, they identify two broad sources: the external environment and the internal, context-dependent characteristics of the operator. Learned trust they divide into two categories: initial and dynamic. The corresponding factors are outlined in Fig. 2.

Fig. 2.

Full model of factors influencing trust in automation (adapted from Hoff and Bashir 2015)

Incorrect levels of trust may result in three possible outcomes: misuse, disuse, and abuse. On the one hand, if automation users violate critical assumptions and rely on the system inappropriately, the result is misuse. In this condition, drivers often do not question the performance of the automation or check its status (Saffarian et al. 2012) and may therefore think it is safe to engage in secondary tasks, even though the automation may be less capable than they believe it to be (Rudin-Brown and Parker 2004a; Hoedemaeker and Brookhuis 1998). On the other hand, if users refuse to use the automation, the result is disuse; in this case, too little trust may mean ignoring or negating the benefits associated with its use (Parasuraman and Riley 1997). Finally, abuse means that designers introduce an inappropriate application of automation. Accidents may happen if operators misuse automation as a result of over-trusting it, or disuse automation because of under-trusting it (Parasuraman and Riley 1997). Although an appropriate level of trust in automation depends on an accurate understanding of the automation's purpose, operation, and historical performance (Lee and Moray 1992), users do not always assess these components correctly and often rely on automation inappropriately (Jones 2013).

3 Method

3.1 Literature Search

For the present literature review, we followed the steps of a systematic review. First, we searched databases such as Web of Science and Google Scholar, as well as related journals, and included the literature relevant to the topic. We then conducted an in-depth review of the included studies. Finally, we identified the main issues affecting take-over time.

Fig. 3.

The process of driver taking over from the automation system

3.2 Frame of the Review

Since there are situations that the automation systems of Level 2 and 3 vehicles cannot handle, the human driver must take over control from the system within a limited time window. The transfer of driving control in fact comprises two stages: hand-over by the automation system and take-over by the driver. Before and during the first stage, the human driver may be performing a secondary task or be out of the loop to a degree that varies with the level of driving automation. This out-of-the-loop performance problem results in deteriorated reactions to take-over requests (Endsley and Kaber 1999; Kaber and Endsley 2004; Neubauer et al. 2012). On receiving a take-over request signal, drivers have to pull their attention away from the distraction and back to driving. Before executing take-over behaviors, they must construct a mental model to make decisions and select appropriate actions according to the traffic and vehicle conditions (Zeeb et al. 2015). The whole process of the driver taking over from the automation system, after previously doing secondary tasks or being distracted for a while, is shown in Fig. 3.

The required take-over time (RTT) is the time from the signal issued by the automation system to the human driver's completion of the take-over, and it should be less than the available take-over time (ATT). RTT comprises perception time, cognitive-processing time, and reaction time (Zeeb et al. 2015). Therefore, if RTT is less than ATT, the driver will be able to successfully resume control.
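The RTT/ATT relation above can be expressed as a minimal computational sketch. The class name and all timing values below are illustrative assumptions, not measured data from the cited studies:

```python
from dataclasses import dataclass


@dataclass
class TakeOverBudget:
    """Decomposition of the required take-over time (RTT), after Zeeb et al. (2015).

    All field values supplied by callers are hypothetical examples.
    """
    perception_s: float  # time to perceive the take-over request and the scene
    cognition_s: float   # time to process the situation and decide on an action
    reaction_s: float    # time to execute the motor response

    @property
    def rtt(self) -> float:
        # RTT = perception time + cognitive-processing time + reaction time
        return self.perception_s + self.cognition_s + self.reaction_s

    def can_resume_control(self, att_s: float) -> bool:
        """Take-over succeeds only if RTT is less than the available take-over time (ATT)."""
        return self.rtt < att_s


# Hypothetical values for an attentive driver (not empirical figures):
budget = TakeOverBudget(perception_s=1.0, cognition_s=1.5, reaction_s=1.5)
print(budget.rtt)                      # 4.0
print(budget.can_resume_control(7.0))  # True: RTT of 4.0 s < 7.0 s ATT
print(budget.can_resume_control(3.0))  # False: take-over would fail
```

The comparison makes the framework's safety criterion explicit: any factor that lengthens perception or cognitive processing (distraction, low SA, overtrust) shrinks the margin between RTT and ATT.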

4 Results and Discussions

The take-over time is determined by situation variables, such as traffic complexity, the human-machine interface, and the level of driver distraction, as well as driver variables (Son and Park 2017; Zeeb et al. 2015).

Regarding distraction, several studies find that secondary tasks influence take-over quality and time (Merat et al. 2012; Merat et al. 2014; Radlmayr et al. 2014). Van den Beukel and Van der Voort (2013) show that out-of-the-loop drivers fail to avoid an unexpected event on almost half of occasions (47.5%), whereas they remain 'safe' during manual driving. When a transition from automated to manual driving is required, Merat et al. (2014) find that the average take-over time is 10 s when drivers are attentive; when drivers are less attentive, the average time to regain control can reach 35-40 s. Gold and Bengler (2014), however, find that the reaction time is typically less than a second for the first gaze at the scenery, 1.5-1.8 s for the first contact with the steering wheel, and about 1.5 s until the foot is on the brake pedal. These results imply that, within the total take-over time, the reaction time is relatively short and consistent, whereas the time for perceiving and decision making is longer and more variable. The length of the take-over time is therefore mainly determined by the time spent perceiving and making decisions.

Perceiving the surroundings corresponds to the first level, or the first and second levels, of SA: perceiving the status, attributes, and dynamics of relevant elements in the environment and comprehending the situation. According to existing research, for drivers who have been in automated driving mode for a long time, the take-over time is affected by two aspects. The first is the characteristics of the traffic conditions in the external environment, including the complexity of the road, other vehicles in the vicinity, and the level of danger of the situation (Gold et al. 2016; Radlmayr, Gold, Lorenz, Farid and Bengler 2014). The second is the inherent and dynamic characteristics of the drivers' adaptation to automation. The inherent characteristics are the drivers' age, personality, education, and experience with similar systems (Körber, Gold, Lechner and Bengler 2015; Bao et al. 2012; Koustanaï et al. 2012; Xiong et al. 2012). If drivers rely on highly automated driving systems over long periods, their driving skills may degrade (Parasuraman, Sheridan and Wickens 2000; Rudin-Brown and Jamson 2013; Cunningham and Regan 2015; Matthews et al. 2010). In particular, when dependence or trust surpasses a certain level, overtrust occurs (Lee and See 2004); even in an emergency, overtrust can lead to longer delays, and drivers take more time to resume control. The dynamic characteristic arises from secondary or non-driving-related tasks, as increasing automation leads to more time spent looking away from the forward roadway toward the secondary tasks (Radlmayr et al. 2014; Matthews et al. 2001; Merat et al. 2010; Merat et al. 2012; Carsten et al. 2012). The more time spent looking away from the forward roadway, the more time is needed to regain control from the automation.

Since the time needed for taking over depends on how long the driver needs to gather information from the environment and develop sufficient situation awareness to make decisions, in-vehicle driver interfaces are designed to support drivers in resuming control safely and adequately when required. Information presented for SA conveys not only warning signals but also richer content, such as the status of the automated system or vehicle and the traffic conditions. Depending on the presentation time relative to the time to collision, warning signals can be categorized into urgent warnings, warnings, and early warnings or information (Götze et al. 2014). To provide warnings that allow drivers to return to the control loop in time, this kind of information is kept relatively simple, with little content; an audible interface is usually recommended (Lee et al. 2001). Tactile feedback can also recapture the attention of distracted drivers, and a tactile stimulus can help drivers select the correct control action more quickly than audible signals can (Fitch et al. 2011; Flemisch et al. 2014). As for the urgency conveyed by warning signals, Van den Beukel, Van der Voort and Eger (2016) conclude that there is a fundamental relationship between perceived urgency and the intensity of the warning signals, including their frequency, wavelength, pace, and duration. Moreover, researchers show that warning signals must be appropriately timed to ensure safety: not so late that the driver lacks sufficient time to re-engage successfully, and not so early that the driver loses concentration (Cunningham and Regan 2015; Gold et al. 2013; Lees and Lee 2007). However, further quantitative research is still needed on what degree of risk, and what proximity to a hazard, makes it safe to prompt drivers to take over control without startling or confusing them.
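The three-tier warning scheme of Götze et al. (2014) can be sketched as a simple mapping from time to collision (TTC) to a warning category. The numeric thresholds below are purely illustrative assumptions; the cited work does not prescribe these values:

```python
def classify_warning(ttc_s: float) -> str:
    """Map time to collision (TTC, in seconds) to one of the three warning tiers
    of Götze et al. (2014): urgent warnings, warnings, and early warnings or
    information. The 4 s and 10 s thresholds are hypothetical examples only.
    """
    if ttc_s < 4.0:
        # Imminent hazard: simple, high-intensity signal, immediate take-over
        return "urgent warning"
    elif ttc_s < 10.0:
        # Hazard approaching: take-over expected soon
        return "warning"
    else:
        # Advisory horizon: richer, content-heavy information is acceptable
        return "early warning / information"


print(classify_warning(3.0))   # urgent warning
print(classify_warning(6.0))   # warning
print(classify_warning(12.0))  # early warning / information
```

The sketch reflects the design trade-off discussed above: the shorter the TTC, the simpler and more intense the signal must be, while longer horizons permit content-rich situational and conditional information.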

Driver interfaces with content-rich information can be separated into situational information and conditional information. Situational information refers to specific traffic situations, such as the location of a hazard, road congestion in the vicinity, the weather conditions, and so on. Conditional information represents the vehicle state, e.g. speed, direction, and the state of the automated driving system. Although some researchers provide evidence that continuous information improves a driver's SA (Martens and Van den Beukel 2013; Stanton, Dunoyer and Leatherland 2011), other studies suggest that a continuous display could result in confusion and distraction (Martens et al. 2008). Beggiato et al. (2015) conclude that information needs change from manual operation to partially and highly automated driving: for partially and highly automated driving, information for monitoring and supervising the automation becomes more important than information related to the driving task, and the information should provide transparency, comprehensibility, and predictability of current and future system actions. In addition, to improve driver-automation cooperation, automation uncertainty should be presented (Beller, Heesen and Vollrath 2013). Regarding the optimal interface for understanding the information and making quick, appropriate decisions, some empirical studies underline factors influencing drivers' reading and comprehension (Cristea and Delhomme 2015), including the length of the message (Arditi 2011), color use (Shaver and Braun 2000), the presence of pictograms (Shinar and Vogelzang 2013), the type of display device (Gertner 2012), the type of message (Wang, Keceli and Maier-Speredelozzi 2009), and the optimal modality for communicating a hand-over request (Naujoks, Mai and Neukum 2014). Based on the foregoing review, the main human factors issues contributing to take-over time are summarized in Table 1.

Table 1. The influence of main human factor issues on the time of taking over

However, further empirical evidence is still needed on how much, and what kind of, risk information can safely be presented to drivers at that moment. Additionally, almost all studies on the transition of control so far have been conducted on simulators, because it is unethical to test the loss of situation awareness, and its duration, on the open road in the presence of acute threats. Consequently, actual data on dangerous transitions of control, and on trust in the reliability of the automated system while driving on real roads, cannot be obtained, so it is impossible to determine from actual data which situations are appropriate for the driver to take over and which are not.

5 Conclusion and Future Work

In summary, this review of the literature has highlighted the human factors that influence the transition from automated to manual driving. Because the combined performance of the driver and the automation will persist for the foreseeable future, drivers' responsibilities will change significantly in these partially automated driving systems: they shift from total control to primarily monitoring and supervising the driving task. As a result, reduced situation awareness and manual-skill degradation will occur, and the shifting role of the human driver may lead to safety problems. Since automated technologies are increasingly being introduced into the market, these latent and urgent risks should be addressed in future research.