Introduction

Wearable devices, electronic technologies or computers incorporated into items of clothing and accessories that can comfortably be worn on the body, have witnessed significant growth in market size in recent years (Yang et al., 2016). As reported by Globe Newswire (2023), the global market generated around USD 137.89 billion in revenue in 2022, with expectations to reach around USD 1358.18 billion by the end of 2035. However, along with the rapid expansion of the wearable device market, privacy-related incidents have also become prominent and widespread (Banerjee et al., 2018), such as the data breach at GetHealth, a health and wellness company, which exposed over 61 million Fitbit and Apple user records online, including personal data like names, birthdates, weights, heights, genders, and geographic locations (Landi, 2021).

Apps installed on wearable devices are typically designed to collect personal information, such as heart rate, sleep patterns, and physical activity (Williams et al., 2019), which can be leveraged to probe deeply into users’ private lives (Tian et al., 2019). In practice, app providers use this data to offer personalized services (Steinfeld, 2015), but this has led to heightened user concerns about data leakage and misuse (Morar Consulting, 2016; Pike et al., 2017). To address this, app providers often include privacy policies during app installation, outlining their data collection and usage rights (Steinfeld, 2015). However, these policies are frequently overlooked by users due to their length and complexity (Angulo et al., 2012; Tsai et al., 2011) or because of users’ attitudes and mental fatigue (Alashoor et al., 2023).

This significant gap between users’ privacy concerns and actual behaviors has inspired extensive research to explore factors that influence the attention users pay to privacy policies. However, most of the relevant research is grounded in the context of mobile phones (Steinfeld, 2016; McDonald and Cranor, 2008; Acquisti and Grossklags, 2005; Angulo et al., 2012; Meinert et al., 2006; Nissenbaum, 2011; Tsai et al., 2011), with little attention paid to wearable devices. Yet user behaviors differ significantly when using wearable devices compared to mobile phones or other devices (Sztyler et al., 2017). A key difference is that wearable devices often have limited capabilities, so users tend to install only the apps they deem essential (Seneviratne et al., 2017). In other words, users perceive a high level of necessity when installing apps on their wearable devices. Perceived necessity, which refers to the extent to which users deem it necessary to install certain apps (Bigot and Rouet, 2007), thus emerges as a crucial factor in understanding wearable device users’ attention to privacy policies.

In addition to emphasizing the role of perceived necessity in shaping users’ attention toward privacy policies, we consider the influence of threat clues, which have been identified as crucial factors influencing users’ behaviors toward privacy policies. For instance, Christin et al. (2013) find that when presented with picture-based warnings, users are more motivated to read privacy policies. In practice, threat clues are also frequently used to intervene in users’ attention to privacy policies (Krol and Preibusch, 2016). Hence, to achieve a deeper understanding, we further examine whether threat clues affect the role of perceived necessity. We therefore intend to answer the following two questions: (1) How does perceived necessity affect users’ visual attention toward privacy policies when installing apps on wearable devices? (2) Whether and how does perceived necessity interact with threat clues?

To address the research questions, we apply dual process theory (James et al., 2015; Wang et al., 2020) and propose that a high (vs. low) level of perceived necessity will trigger a strong (vs. weak) sense of compulsion (Bigot and Rouet, 2007; Maule et al., 2000), which motivates users to adopt less (vs. more) analytical processing (Alashoor et al., 2023). This drives users to rely more (vs. less) on prior knowledge to complete a given task (Bigot and Rouet, 2007; Marsh et al., 1999). As a result, they will pay less (vs. more) attention to privacy policies, which is reflected in fewer visual fixations. Moreover, we investigate how the presence of threat clues moderates the role of perceived necessity. We propose that when users perceive a high level of privacy risk from threat clues, they will experience psychological discomfort if their perceived necessity is also high, strengthening their avoidance of privacy policies; this will enlarge the discrepancy in attention paid to privacy policies across levels of perceived necessity. In contrast, a low level of threat clues will mitigate the emphasis on potential privacy risks, reducing the effect of perceived necessity.

To test these hypotheses, we present two experiments involving both self-report and eye tracking evidence to study the impacts of perceived necessity and threat clues and to delve into how these factors influence users’ processing modes (analytical vs. automatic) when assessing privacy policies. Our research provides three primary contributions to the literature on users’ behaviors toward privacy policies. First, to the best of our knowledge, this paper is among the first to examine the unique role of perceived necessity in influencing users’ privacy policy reading behavior in the context of wearable devices and its interaction with threat clues. This provides a novel perspective and extends previous research that predominantly focuses on mobile phones.

Second, our research expands previous work that mainly focuses on behavioral outcomes by delving into the information searching process underlying the effects of perceived necessity using eye tracking technology. Third, we deepen our understanding of users’ behaviors toward privacy policies when installing apps by disclosing the cognitive mechanisms driving the effects of perceived necessity and threat clues, based on dual process theory. These findings also offer actionable implications for app providers seeking to induce users to pay more attention to privacy policies, especially by managing the factors of perceived necessity and threat clues. We also suggest that users remind themselves to pay extra attention to privacy policies when installing apps they consider highly necessary, as a high level of perceived necessity can lead them to neglect the policies.

Theory development and hypotheses

Users’ behaviors toward privacy policy

When we install apps onto our electronic devices, we are often required to accept privacy policies. These documents are designed to explain how app providers can gather and use personal information (Steinfeld, 2015). Despite assurances that personal data is utilized to customize services, users remain worried about potential privacy breaches after agreeing to these terms (Morar Consulting, 2016; Pike et al., 2017). Yet, it is rare for users to thoroughly read through these policies. A mere 4% do so consistently, while a notable 55% confess to having never perused the terms at all (dos Santos Brito et al., 2013).

This discrepancy between stated concerns and actual behavior can have grave repercussions. For instance, users may initiate legal proceedings against app providers if they suspect a data leak, often citing policies as being vague or key terms as lacking prominence for proper understanding. Such disputes can not only financially strain providers but also elicit public criticism over business ethics (Hong and Ahmad, 2016). Consequently, significant research effort has been devoted to identifying the factors that contribute to users’ neglect of privacy policies, such as their complexity, use of legal jargon, and length (Angulo et al., 2012; Milne and Culnan, 2004; Nissenbaum, 2011; Tsai et al., 2011). For instance, McDonald and Cranor (2008) highlight that users often avoid privacy policies due to their lengthy and complex nature. Winkler and Zeadally (2016) have shown that the likelihood of a user engaging with a policy depends on how accessible it is and where it is located within the app’s interface. Steinfeld (2016) emphasizes that users are more likely to ignore a policy that is not presented to them by default.

On the other hand, wearable devices have gained widespread popularity in daily life for tracking health and activity data. The sensitive nature of the data collected by wearable devices has sparked heightened privacy concerns among users (Williams et al., 2019). This growing concern may lead to increased disputes between users and app providers, underscoring the need to delve into user behaviors regarding privacy policies. It is worth noting that wearable devices differ significantly from the mobile phones that most existing literature focuses on. One key distinction is the limited capacity of wearable devices. This unique characteristic requires users to be selective in their app installations, prioritizing essential apps and giving rise to a pronounced role of “necessity.” For instance, swimmers perceive a great sense of necessity to use wearable device apps to track their activity, as smartphones are not an option in aquatic environments. The concept of “perceived necessity”—representing the extent to which users feel compelled to install certain apps—is a psychological factor that differentiates between app installations on smartphones versus wearables (Hong and Ahmad, 2016). However, this factor has received limited attention in the existing literature. Our research seeks to fill this gap by concentrating on the context of wearable devices and the unique role of perceived necessity. In the next section, we apply the dual process theory to discuss how users’ perceived necessity affects their attention to privacy policies.

The effect of perceived necessity on users’ reading behaviors toward privacy policy: based on the dual process theory

Dual process theory comprises a set of theories within social and cognitive psychology suggesting the presence of two fundamentally different modes of information processing during judgment and decision-making: System 1 and System 2 (Evans, 2009; Kahneman et al., 2011; Lieberman et al., 2002; Lieberman, 2007; Evans and Stanovich, 2013). While various descriptions exist for these two types of modes, scholars generally concur that System 1 processes are characterized by being unconscious, swift, automatic, effortless, intuitive, heuristic, and experiential. In contrast, System 2 processes are conscious, slow, deliberate, effortful, analytical, systematic, and rational (Evans, 2009). These two processing systems can significantly influence individuals’ thoughts, attitudes, and behaviors toward target tasks. For instance, Camerer et al. (2005) found that when individuals use an automatic processing mode (System 1), they tend to rely more on previous experience or knowledge to process the situation. In this case, information is processed quickly, leading individuals to pay less attention to information. In contrast, when individuals use an analytical processing mode (System 2), they tend to invest more cognitive resources to process presented information. Consequently, information is processed slowly, prompting individuals to pay more attention to information.

Perceived necessity is one of the primary dimensions along which people think about consumption and perhaps the most fundamental concern in consumer behavior (Norris and Williams, 2016). In our context, perceived necessity reflects personal needs, denoting the degree to which users perceive the installation of specific applications as essential; given the limited capacity of wearable devices, it emerges as a pivotal factor influencing users’ behaviors when installing apps. When users perceive a high necessity to install an app, this perception creates a strong sense of compulsion to complete the installation, eliciting an automatic processing mode (Bigot and Rouet, 2007; Marsh et al., 1999). As noted above, an automatic processing mode often leads users to rely more on prior experiences to evaluate situations, which results in paying less attention to specific information, such as privacy policies. Hence, we suggest that with a strong perceived necessity, users are likely to give less focus to the installation process, including the scrutiny of privacy policies, primarily because automatic processing dominates analytical processing.

Conversely, a low level of perceived necessity implies the presence of alternatives to the app. In such cases, users might opt for an app with more advanced functions or a better interface. This selection process involves a lower compulsion level, potentially reducing the psychological pressure and allowing for a more thorough consideration of privacy policies during installation (Payne et al., 1995; Kaplan et al., 1993). Therefore, users are more likely to carefully review privacy policies to ensure they align with their data leakage concerns, a process more aligned with analytical processing. Thus, we propose the following hypotheses:

H1a: A high level of perceived necessity will lead users to spend less time reading the privacy policy than a low level of perceived necessity.

H1b: A high level of perceived necessity will make users less likely to adopt the analytical processing mode than a low level of perceived necessity.

Furthermore, the presence of threat clues has been widely used by app providers to heighten users’ awareness of privacy risks and motivate them to review privacy policies. Recent research has underscored the profound influence of threat clues on user behavior regarding privacy policy engagement. Mamonov and Benbunan-Fich (2018) found that exposure to news about information breach threats prompted users to be more circumspect in disclosing private information, resulting in strengthened password protocols. Similarly, Chang et al. (2022) reported that the intensity of a perceived threat correlates positively with privacy protection actions. Therefore, to enhance our understanding of perceived necessity, we explore the interaction between threat clues and perceived necessity in the next section.

The interaction between perceived necessity and threat clues

Recent studies have highlighted a key determinant in users’ responses to privacy policies: the role of threat clues (Pizzi and Scarpi, 2020; Estrada-Jiménez et al., 2017; Christin et al., 2013). These clues, often manifested as words or images that warn of potential privacy breaches (Fox, 1996; Dijksterhuis and Aarts, 2003), have been shown to significantly influence user behavior. For instance, De Oca and Black (2013) posited that the perception of privacy threats, prompted by these clues, can evoke automatic defensive reactions in users and demand greater cognitive resources when navigating privacy concerns. This underscores the intimate connection between threat clues and users’ information searching and processing.

In light of this, we posit that the presence of threat clues may influence the impact of perceived necessity on users’ engagement with privacy policies. Specifically, when users are presented with strong indicators of potential threats to their personal information, as noted by Mamonov and Benbunan-Fich (2018), their concerns regarding data leakage are likely to be amplified, which potentially drives them to pay more attention to privacy policies. However, if users concurrently perceive a high necessity to install the app, which we have argued tends to lead them to detract from their consideration of privacy policies, they may find themselves in a psychological tug-of-war between the urge to overlook and the need to attend to privacy policies due to the respective influences of perceived necessity and potential threats. This internal conflict can result in significant psychological discomfort, leading users to shy away from confronting disconcerting information – a phenomenon often referred to as the “ostrich effect” (Karlsson et al., 2009). This effect may paradoxically reduce users’ reliance on analytical processing and cause them to disregard privacy policies (dos Santos Brito et al., 2013).

In contrast, when the perceived necessity for an app is low, there is less compulsion to act hastily. In such scenarios, threat clues are less likely to provoke psychological discomfort and more likely to bolster the motivation to meticulously evaluate privacy policies (Christin et al., 2013). Additionally, users may have a broader array of options, allowing them to scrutinize and compare privacy policies across different apps to find one that aligns with their privacy concerns—a process characteristic of analytical processing. Thus, we hypothesize that the presence of high-threat clues will enlarge the disparity in the attention users devote to privacy policies under conditions of high versus low perceived necessity.

On the other hand, in the presence of low-threat clues indicating minimal risk of privacy breaches, user engagement with privacy policies tends to decrease (Christin et al., 2013). Under these conditions, the low prioritization of privacy concerns contributes to a general decline in the scrutiny of privacy policy terms. Consequently, even if the perceived necessity for installing an app is low, users may not be motivated to closely examine the privacy policies due to the diminished emphasis on potential privacy risks. This behavior suggests a less significant role of perceived necessity in directing users’ attention to privacy policies when they face low levels of threat clues, as well as in influencing users’ processing mode. Thus, we propose the following hypotheses:

H2a: Threat clues will interact with perceived necessity to affect the attention users pay to the privacy policy. Specifically, when users perceive a high level of threat clues, the discrepancy in users’ attention to the privacy policy across levels of perceived necessity (as described in H1a) will be enlarged. In contrast, when users perceive a low level of threat clues, the discrepancy will be reduced.

H2b: Threat clues will interact with perceived necessity to affect users’ likelihood of applying an analytical processing mode. Specifically, when users perceive a high level of threat clues, the discrepancy in users’ likelihood of applying an analytical processing mode across levels of perceived necessity (as described in H1b) will be enlarged. In contrast, when users perceive a low level of threat clues, the discrepancy will be reduced.

Previous research has predominantly relied on self-reported methods to assess users’ engagement with privacy policies. However, such methods are prone to producing biased results and are limited in their ability to track the process of information searching (Choi and Pak, 2005). To mitigate these limitations and obtain more accurate evidence, we have incorporated eye tracking technology to quantitatively measure individuals’ reading behavior. The following section will discuss the contribution of this technology to our research.

The application of eye tracking

Eye tracking provides specific parameters to gauge individuals’ searching behaviors and quantify visual attention, such as fixation duration and initial fixation. Previous research has widely adopted this technology to study information searching behaviors. For instance, Sheng et al. (2020) and Steinfeld (2016) use eye tracking to reveal which design elements attract or retain the attention of system users. Ozimek et al. (2019) use it to explore the effectiveness of different privacy notifications in capturing and retaining users’ attention. Our study mainly aims to measure the visual attention paid to the privacy policy. Previous studies used fixation duration as a parameter representing the time a user spends reading a privacy policy (Duchowski, 2017), which applies directly to our study.

Therefore, we also use fixation duration to reflect users’ attention to the privacy policy. According to H1a, we contend that users with a high level of perceived necessity will exhibit shorter fixation durations on the privacy policy than those with a low level of perceived necessity. In line with H2a, the difference in fixation durations on the privacy policy between the high and low levels of perceived necessity will be enlarged when users sense a high level of threat clues and diminished when they perceive a low level of threat clues.
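To make the measure concrete, the following minimal sketch illustrates how total fixation duration within an area of interest (AOI) covering the privacy policy could be computed from raw fixation data. The data layout and AOI coordinates are hypothetical assumptions, not the export format of any particular eye tracker.

```python
import pandas as pd

# Hypothetical fixation log: one row per fixation, with screen
# coordinates (x, y) in pixels and fixation duration in milliseconds.
fixations = pd.DataFrame({
    "x": [512, 530, 210, 540, 515],
    "y": [300, 320, 700, 310, 305],
    "duration_ms": [180, 220, 150, 260, 200],
})

# Hypothetical AOI bounding the privacy policy text on screen.
AOI = {"x_min": 400, "x_max": 700, "y_min": 250, "y_max": 450}

def total_fixation_duration(df: pd.DataFrame, aoi: dict) -> float:
    """Sum the durations of all fixations inside the AOI, in seconds."""
    inside = (df["x"].between(aoi["x_min"], aoi["x_max"])
              & df["y"].between(aoi["y_min"], aoi["y_max"]))
    return df.loc[inside, "duration_ms"].sum() / 1000.0

print(total_fixation_duration(fixations, AOI))  # 0.86 s on the policy AOI
```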

Study overview

We performed two studies to examine the hypotheses, and the theoretical framework is shown in Fig. 1. In study 1, we adopt eye tracking technology to analyze users’ information searching process by exploring the effects of perceived necessity and threat clues to examine H1a and H2a. However, this approach still cannot reveal the cognitive mechanisms underlying the information-searching processes. To address this, in study 2, we extend study 1 by evaluating cognitive processes underlying information searching to examine H1b and H2b. We adopt the process-dissociation procedure (PDP) to explore how the processing mode (analytical vs. automatic) contributes to the effects of perceived necessity and threat clues.

Fig. 1
figure 1

Theoretical framework.

Study 1: the effect of perceived necessity on users’ attention and its interaction with threat clues

In Study 1, our objective is to investigate how perceived necessity and the presence of threat clues influence users’ information searching process while reading privacy policies during app installation on wearable devices. We employed eye tracking technology to record participants’ information searching behaviors (i.e., eye movements). Building on H1a, we anticipate that a high level of perceived necessity (compared to a low level) will lead to reduced visual attention towards the privacy policy, and that there will be an interaction effect between perceived necessity and threat clues on users’ attention to the privacy policy (H2a). As previously mentioned, we utilized fixation duration to gauge participants’ visual attention, with longer fixation duration indicating greater attention allocation. We implemented a 2 (threat clue level: low vs. high) × 2 (perceived necessity: low vs. high) between-subject design.

Participants and method

Participants

We enrolled 111 participants (34 Males, Mage = 20.89, SD = 2.51) from a large Chinese public university. All participants were right-handed, possessed normal or corrected-to-normal vision, and had no prior involvement in similar experimental studies. To ensure the integrity of our data, we conducted a thorough review of the eye movement records. As a result, we excluded three participants whose cumulative fixation duration fell below one second, in line with the criteria suggested by Hessels et al. (2017). Additionally, we identified four participants who did not successfully pass the manipulation check. Consequently, our final data analysis comprised 104 participants who met the inclusion criteria.

Procedure

All participants were randomly assigned to one of four experimental conditions. Initially, they were shown one of two news items to manipulate the threat clue level. In the high-threat condition, the news described instances of information privacy breaches, while in the low-threat condition, participants were presented with unrelated news. Subsequently, participants assessed the extent to which they perceived the threat of a data leak using a single-item, 7-point scale on the second screen (1 – not at all, 7 – very much). Detailed information can be found in Appendix A. They were asked, “In your opinion, how much of a threat is the leakage of private personal information to you?” This question served as a manipulation check for the threat level.

On the third screen, participants were presented with one of another two news items to manipulate the level of perceived necessity (Aro and Wilska, 2014). Specifically, in the high necessity condition, the news conveyed that using a specific app was compulsory, whereas in the low necessity condition, it indicated that alternative software could be used. Detailed information can be found in Appendix B. Following this, participants rated the extent to which they perceived the installation of the app as necessary using a single-item, seven-point scale (1 – not at all, 7 – very much). This question served as the manipulation check for the necessity level.

Before conducting this experiment, we carried out a pretest with a separate sample from the same pool to assess the effectiveness of these two manipulations; further details can be found in Appendix C. Then, the formal task started. Participants first read a cover story we prepared to familiarize themselves with the experimental scenario, shown as follows:

[A smart bracelet is a wearable smart device worn on the wrist and suitable for daily use. It can record your exercise steps, mileage, sleep, heart rate, and other real-time information to support your daily life. A privacy policy is a statement of how the app service provider collects, uses, retains, and discloses personal information. In general, you can only use the mobile app if you agree to the privacy policy. Suppose you have purchased the bracelet shown in the picture and are about to use it to keep track of your movements. You have learned that enabling the corresponding app for the bracelet allows you to access more comprehensive exercise data and health information. However, you can still use the smart bracelet without enabling the app, albeit with some limitations on certain functions. You are now trying to enable the feature, so you install and open the app for the first time.]

After that, we presented participants with privacy policies related to the app, which they were instructed to read at their own pace. In real life, when users need to install software on their wearable devices, such as the widely used Apple Watch, installation is usually done through their phones. Taking the Apple Watch as an example, the app installation process is managed via an iPhone app, often referred to as “Watch”: users first install apps on their iPhones and subsequently sync them to their Apple Watch. That is, although the apps run on wearable devices, the installation and the reading of privacy policies during setup usually occur on mobile phones. To replicate this real-life installation process as accurately as possible, we displayed the privacy policies and manipulation stimuli to participants as they would appear on mobile phones, using web pages to maintain consistency with the familiar notification style of real-world mobile device usage. Participants’ eye movements were recorded while they read the experimental materials. The procedure and main materials are shown in Fig. 2A–C.

Fig. 2: The experimental procedure and materials for Study 1.
figure 2

A Procedure of Study 1; B Picture of bracelet; C Picture of privacy policy.

Results

Manipulation check

We observed that participants in the high necessity condition perceived significantly higher levels of app installation necessity than those in the low necessity condition (Mhigh = 5.76, SD = 1.215 vs. Mlow = 3.7, SD = 1.898; F(1, 102) = 55.909, p < 0.001, ηp2 = 0.304). As expected, participants in the high-threat condition perceived significantly higher levels of threat from personal information leakage than those in the low-threat condition (Mhigh = 5.97, SD = 1.149 vs. Mlow = 3.22, SD = 1.577; F(1, 102) = 23.627, p < 0.001, ηp2 = 0.509). These findings indicate successful manipulations.

Visual attention analysis

We performed a 2 × 2 two-way ANOVA and found significant main effects of perceived necessity (F(1, 102) = 12.128, p = 0.001, ηp2 = 0.108) and threat clues (F(1, 102) = 15.026, p < 0.001, ηp2 = 0.131) on the fixation duration, as well as the interaction effect between them (F(1, 102) = 4.064, p = 0.046, ηp2 = 0.039), supporting H2a (as shown in Table 1).

Table 1 Two-factor ANOVA results for threat clue and perceived necessity.
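As an illustration of the analysis pipeline, the sketch below shows how such a 2 × 2 between-subjects ANOVA could be run in Python with statsmodels; the file name and column names are placeholders rather than our actual data files.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Placeholder long-format data: one row per participant, with the two
# condition labels and total fixation duration on the policy (seconds).
df = pd.read_csv("study1_fixations.csv")  # columns: necessity, threat, fixation

# 2 (necessity: low/high) x 2 (threat: low/high) between-subjects ANOVA.
model = ols("fixation ~ C(necessity) * C(threat)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects + interaction
```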

Specifically, as for the effect of perceived necessity, participants directed significantly less visual attention to the privacy policy in the high necessity condition compared to the low one (Mhigh = 15.218, SD = 13.106 vs. Mlow = 27.321, SD = 22.551; F(1, 102) = 12.128, p = 0.001, ηp2 = 0.108), supporting H1a. Conversely, regarding the effect of threat clues, fixation durations were significantly longer in the high-threat clue condition compared to the low-threat clue one (Mhigh = 27.038, SD = 22.835 vs. Mlow = 14.314, SD = 10.730; F(1, 102) = 15.026, p < 0.001, ηp2 = 0.131).

Regarding the interaction effect, when participants perceived a high level of threat clues, variations in perceived necessity levels resulted in significant differences in visual attention. Specifically, a high level of perceived necessity led participants to allocate significantly less visual attention than a low level of perceived necessity (Mhigh = 18.250, SD = 3.190 vs. Mlow = 36.502, SD = 3.310; F(1, 102) = 15.763, p < 0.001, ηp2 = 0.136). In contrast, when participants perceived a low level of threat clues, the high and low levels of perceived necessity did not lead to significant differences (Mhigh = 12.075, SD = 3.248 vs. Mlow = 16.943, SD = 3.520; F(1, 102) = 1.033, p = 0.312, ηp2 = 0.010) (see Table 2 and Fig. 3). These findings provide support for H2a.

Table 2 Simple effect analysis of threat clue and perceived necessity.
Fig. 3
figure 3

Graph of interaction between perceived necessity and threat clues.

We also carried out a simple effect analysis based on the levels of perceived necessity. Specifically, when participants perceived a low level of necessity, the high and low-threat clue conditions yielded significant differences in visual attention (Mhigh = 36.502, SD = 3.310 vs. Mlow = 16.943, SD = 3.520; F(1, 102) = 16.386, p < 0.001, ηp2 = 0.141) (see Table 2 and Fig. 3). However, when participants perceived a high level of necessity, the high and low-threat clue conditions did not result in significant differences (Mhigh = 18.250, SD = 3.190 vs. Mlow = 12.075, SD = 3.248, F(1, 102) = 1.840, p = 0.178, ηp2 = 0.018).

To enhance the visualization of the impacts of threat clues and perceived necessity on fixation duration, we also presented the results using a hotspot image (see Fig. 4). A hotspot image provides an intuitive representation of the areas on the page that garnered the most visual attention, making the results easily comprehensible (Röck et al., 2018; Coppola et al., 2014). We utilized the Tobii Pro Spectrum 1200 to define Areas of Interest (AOI) and generate the hotspot image, an approach widely adopted in previous literature (Ahn et al., 2019). In Fig. 4, participants’ visual fixation durations are depicted using a color scale ranging from shades of green to red: the redder the color, the longer the visual fixation, and the greener the color, the shorter the visual fixation.

Fig. 4
figure 4

Hotspot image of threat clue and perceived necessity.
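For intuition about how such heatmaps are typically produced, the following sketch accumulates duration-weighted fixation points on a pixel grid, applies Gaussian smoothing, and renders the result on a green-to-red scale. The coordinates, screen size, and smoothing width are illustrative assumptions, not the parameters of the Tobii software.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter

# Hypothetical fixations: pixel coordinates and durations in ms.
xs = np.array([512, 530, 210, 540, 515])
ys = np.array([300, 320, 700, 310, 305])
durations = np.array([180, 220, 150, 260, 200])

# Accumulate duration-weighted fixations on a stimulus-sized grid,
# then blur to approximate the spread of foveal vision.
heat = np.zeros((900, 1440))  # screen height x width in pixels
for x, y, d in zip(xs, ys, durations):
    heat[y, x] += d
heat = gaussian_filter(heat, sigma=40)

# Reversed red-yellow-green colormap: longer fixation appears redder.
plt.imshow(heat, cmap="RdYlGn_r")
plt.axis("off")
plt.show()
```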

Upon examination of the fixation heatmap, our findings were reaffirmed. Specifically, we observed that in the low necessity condition (compared to the high necessity condition), privacy policies attracted more visual attention from participants, supporting H1a. Furthermore, in conditions with high-threat clues (as opposed to low-threat clues), participants directed more visual attention toward privacy policies, supporting H2a.

Discussion

In Study 1, we traced users’ information searching processes while reading the policies using eye tracking technology, providing direct evidence supporting H1a and H2a concerning the effects of perceived necessity and threat clues on users’ attention to privacy policies. Our findings indicate that a heightened perceived necessity for app installation correlates with diminished scrutiny of privacy policies, a trend further accentuated by threat clues that heighten users’ perceived risk of privacy violations. However, this approach still cannot reveal the cognitive mechanisms underlying the information searching processes. To address this, we applied the process-dissociation procedure (PDP) paradigm to disclose users’ cognitive mechanisms in an online experiment in Study 2.

Study 2: cognitive mechanisms underlying the effects of perceived necessity and threat clues on users’ information searching

Study 2 extends Study 1 by unveiling the cognitive mechanisms underlying the effects of perceived necessity and threat clues on users’ attention to privacy policies when installing apps on wearable devices. We expect that a high level of perceived necessity will lead to less analytical processing (H1b) and that perceived necessity will interact with threat clues in shaping users’ processing mode (H2b). To identify which processing mode users adopt when reading privacy policies, we use a 2 (threat clue level: low vs. high) × 2 (perceived necessity: low vs. high) between-subject design based on the process-dissociation procedure (PDP) paradigm.

Process-dissociation procedure (PDP) paradigm

The PDP is a psychological method developed by Larry L. Jacoby (1991), and it provides a well-studied method to estimate the contributions of automatic and analytical processes to the performance of a specific task (Payne, 2001). PDP is based on recall tasks, in which participants complete memory tasks under different conditions, and determines the extent to which individuals rely on analytical and automatic processes.

The PDP paradigm consists of two test conditions: inclusion and exclusion. In the inclusion test condition, participants are asked to recall items using all available memory, whether they remember seeing the items consciously or just feel familiar with them. This condition is thought to tap into both analytical and automatic processes. In the exclusion test condition, participants are instructed to recall or recognize only those items that they can consciously remember, excluding items that feel familiar but cannot be consciously placed. This condition is primarily designed to tap into analytical processes. By comparing performance across these two tasks (inclusion and exclusion), we can infer the contributions of analytical and automatic processes. If a participant performs well in both tasks, it suggests strong analytical processes. If they perform well in the inclusion task but poorly in the exclusion task, it suggests a reliance on automatic processes.

Scholars have shown great interest in the information processing procedure of how users read privacy policies, with numerous studies exploring whether this process aligns with a conscious (analytic) or an unconscious (automatic) process (Mo et al., 2006; Wang, 2006). In order to effectively identify the information process mode, the process-dissociation procedure (PDP) has been widely adopted (Destrebecqz and Cleeremans, 2001; Stewart et al., 2009).

Specifically, the PDP assumes that analytical and automatic processes can occur only in the following three cases: (1) only the analytical process exists, and the automatic process is absent; (2) only the automatic process exists, and the analytical process is absent; and (3) the analytical and automatic processes exist at the same time. We denote the probabilities of these three cases as P1, P2, and P3, respectively. In addition, we denote by Pan the probability that the analytical process exists and by Pau the probability that the automatic process exists; these are the quantities to be solved for. Then, according to the PDP paradigm’s basic assumption that the processes are independent and the multiplication theorem of probability, we obtain:

$$P_{1}=P_{\mathrm{an}}\times (1-P_{\mathrm{au}})$$
(1)
$$P_{2}=P_{\mathrm{au}}\times (1-P_{\mathrm{an}})$$
(2)
$$P_{3}=P_{\mathrm{an}}\times P_{\mathrm{au}}$$
(3)

As described above, the PDP comprises two tests: an inclusion test and an exclusion test. In the inclusion test, the analytical process acts in the same direction as the automatic process. Hence, according to the addition theorem of probability, the probability that a participant correctly recognizes the learning content (Pri) can be expressed as Eq. (4); substituting Eqs. (1), (2), and (3) into (4) yields Eq. (5). In the exclusion test, correct recognition depends on the analytical process, so the probability that a participant incorrectly recognizes the learning content (Pro) can only be attributed to the automatic process operating in the absence of the analytical process, as expressed by Eq. (6).

$$P_{\mathrm{ri}}=P_{1}+P_{2}+P_{3}$$
(4)
$$P_{\mathrm{ri}}=P_{\mathrm{an}}+P_{\mathrm{au}}\times (1-P_{\mathrm{an}})$$
(5)
$$P_{\mathrm{ro}}=P_{\mathrm{au}}\times (1-P_{\mathrm{an}})$$
(6)

From Eqs. (5) and (6), we obtain:

$$P_{\mathrm{an}}=P_{\mathrm{ri}}-P_{\mathrm{ro}}$$
(7)
$$P_{\mathrm{au}}=P_{\mathrm{ro}}/[1-(P_{\mathrm{ri}}-P_{\mathrm{ro}})]$$
(8)

Equations (7) and (8) were used to calculate the probability of the existence of the originally hypothesized analytical process (Pan) and automatic process (Pau), respectively.
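Since the estimation reduces to simple arithmetic, a minimal sketch (with made-up response rates) shows how Eqs. (7) and (8) map test performance onto the two process estimates:

```python
def pdp_estimates(p_ri: float, p_ro: float) -> tuple[float, float]:
    """Estimate analytical (Pan) and automatic (Pau) contributions from
    inclusion-test accuracy (p_ri) and exclusion-test errors (p_ro)."""
    p_an = p_ri - p_ro              # Eq. (7)
    p_au = p_ro / (1 - p_an)        # Eq. (8)
    return p_an, p_au

# Made-up example: 80% correct in the inclusion test, 25% false
# recognitions in the exclusion test.
p_an, p_au = pdp_estimates(0.80, 0.25)
print(f"analytical = {p_an:.2f}, automatic = {p_au:.2f}")
# analytical = 0.55, automatic = 0.56
```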

Participants and methods

In this study, following previous research, we implemented the PDP in three stages: a learning stage, a distraction stage, and a recognition stage (Jacoby et al., 1993; Xiang et al., 2013; Lacot et al., 2017). In the learning stage, users read stimuli pairing each app name with its corresponding authorized content. The distraction stage used video materials and procedures validated in prior literature to mitigate the recency effect, that is, the tendency for individuals to remember the most recently presented items better than information presented earlier (Talmi and Goshen-Gottstein, 2006).

Participants

We recruited 56 participants (19 Males, Mage = 21.41, SD = 2.18) from a large Chinese public university. Before the study, we ran a power analysis using G*Power 3.1.9 to determine the appropriate sample size; it indicated that a minimum of 40 participants was needed to achieve a desirable statistical power of 0.8. All recruited participants were right-handed, possessed normal or corrected-to-normal vision, and had no prior involvement in similar experiments.
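For readers without access to G*Power, an equivalent calculation can be sketched in Python with statsmodels. Because the G*Power inputs (notably the assumed effect size) are not reported here, the values below are illustrative assumptions, and the resulting N depends on them.

```python
from statsmodels.stats.power import FTestAnovaPower

# Assumed inputs: a large effect (Cohen's f = 0.5), alpha = .05,
# power = .80, and four between-subject groups (2 x 2 design).
analysis = FTestAnovaPower()
n_total = analysis.solve_power(effect_size=0.5, alpha=0.05,
                               power=0.80, k_groups=4)
print(round(n_total))  # total sample size under these assumptions
```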

Stimuli construction and experimental setting

To ensure the rationality of app authorization content, we engaged eight experts in the field of information systems to curate a list of 30 common privacy authorization items. We allocated these 30 items to ten different apps based on the principles of authorization rationality. Table 3 shows the specific privacy authorization details.

Table 3 App list and 30 privacy authorization items.

As illustrated in Fig. 5, we positioned text on the left side, in the red line box, to vary the degree of necessity (e.g., “You can choose to use this app, or you can choose not to use it and opt for an alternative app.” for the low necessity condition vs. “You are required to use this app.” for the high necessity condition). Conversely, on the right side, in the blue line box, we incorporated images to manipulate the level of perceived threat (e.g., “The software background will automatically turn on microphone monitoring, so be careful about microphone authorization! No privacy at all!” for the high-threat clue condition vs. “It has also passed the information security supervision of the Ministry of Industry and Information Technology, so you can confidently grant microphone permission.” for the low-threat clue condition) (Mamonov and Benbunan-Fich, 2018). More sample graphs of the experimental stimuli can be found in Appendix D. Since apps on wearable devices are usually installed through mobile phones, we replicated the style of notifications commonly encountered on mobile devices in real-world scenarios by presenting the privacy policy in the format of a web page, as in Study 1.

Fig. 5
figure 5

Sample graph of experimental stimuli.

Procedure

All participants were randomly assigned to one of four experimental conditions. Initially, during the practice stage, they familiarized themselves with the procedure and completed a questionnaire. This questionnaire included questions about their age, gender, and whether they had color blindness, aiming to exclude participants who might have difficulty distinguishing between blue and red. Subsequently, the formal task started, which consisted of three stages: the learning stage, the distraction stage, and the recognition stage. The learning stage was further divided into two parts. In the first part, participants read sentences displayed on the screen to become acquainted with the app name and its corresponding authorized content, which was highlighted in red (e.g., as shown in Fig. 6).

Fig. 6
figure 6

Experimental stimuli in red color.

On the subsequent screen, a black fixation point remained visible for 1000 ms. These two steps comprised one round, and there were 10 rounds in total. Following this, the second part of the learning stage was identical to the first, with the color of the keyword and frame line changed to blue, as depicted in Fig. 7. During the distraction stage, participants viewed an unrelated 3-min video designed to mitigate the recency effect.

Fig. 7
figure 7

Experimental stimuli in blue color.

The recognition stage comprised two tests: an inclusion test and an exclusion test. In the inclusion test, participants judged 30 sets of material as a “correct match” or an “incorrect match.” For instance, for the ZOOM app, granting permission for camera access aligns with its functionality, as it is essential for video communication (a correct match), whereas authorizing fingerprint payment would be inappropriate since ZOOM does not incorporate or require fingerprint payment capabilities (an incorrect match). After the instructions, we posed two questions to verify that participants understood them and promptly corrected any misunderstanding. In the exclusion test, participants judged 30 sets of material as “new” or “old”: a match was judged “new” if it had been shown in the red box before and “old” if it had been shown in the blue box before. Again, two comprehension questions followed the instructions, and any misunderstanding was corrected promptly. The experimental procedure of Study 2 is shown in Fig. 8.

Fig. 8
figure 8

Experimental flow chart & Behavioral experiment procedure of study 2.

Results

Manipulation check

We found that participants in the high necessity condition perceived significantly greater levels of necessity compared to those in the low necessity condition (Mhigh = 5.07, SD = 1.761 vs. Mlow = 3.32, SD = 1.627; F(1, 54) = 49.172, p < 0.01, ηp2 = 0.182). As anticipated, participants in the high-threat clue condition perceived significantly higher levels of threat than those in the low-threat clue condition (Mhigh = 5.90, SD = 0.939 vs. Mlow = 2.72, SD = 1.601; F(1, 54) = 25.344, p < 0.001, ηp2 = 0.602). These results showed a successful manipulation.

Descriptive analysis of processing mode

We initially computed two probabilities: the likelihood of making a correct judgment in the inclusion test (Pri) and the likelihood of making an incorrect judgment in the exclusion test (Pro). Then, we calculated the contribution of the automatic processing mode (Pau = Pro/[1 − (Pri − Pro)]) and the analytical processing mode (Pan = Pri − Pro) (See Table 4). Table 4 reveals that when participants perceived a low level of perceived necessity, the contribution rate of the analytical processing mode was higher compared to when they perceived a high level of necessity.

Table 4 The descriptive analysis of processing mode (M ± SD).

Analytical processing analysis

We performed a 2 × 2 two-way ANOVA and found significant main effects of perceived necessity (F(1, 54) = 16.216, p < 0.001, ηp2 = 0.653) and threat clues (F(1, 54) = 97.739, p < 0.001) on analytical processing, as well as the interaction effect between them (F(1, 54) = 4.066, p = 0.049, ηp2 = 0.073), as shown in Table 5.

Table 5 Two-factor ANOVA for threat clue and perceived necessity.

We observed distinct patterns based on users’ perceptions of necessity and threat clues. When users perceived a high level of necessity, there was a noticeable decrease in their tendency to employ analytical processing compared to those perceiving a low level (Mhigh = 0.358, SD = 0.067 vs. Mlow = 0.521, SD = 0.089; F(1, 54) = 28.824, p < 0.001, ηp2 = 0.653). Hence, H1b was supported. In addition, when users encountered a high level of threat clues, they exhibited a higher likelihood of engaging in analytical processing than those encountering a low level (Mhigh = 0.647, SD = 0.074 vs. Mlow = 0.401, SD = 0.076; F(1, 54) = 76.526, p < 0.001, ηp2 = 0.238) (refer to Table 6 and Fig. 9 for details).

Table 6 Simple effect analysis for threat clue and perceived necessity.
Fig. 9
figure 9

Graph of interaction between perceived necessity and threat clue.

Regarding the interaction effect, when users perceived a high level of threat clues, they were less likely to adopt an analytical processing mode under high perceived necessity than under low perceived necessity (Mhigh = 0.521, SD = 0.089 vs. Mlow = 0.647, SD = 0.074; F(1, 54) = 19.047, p < 0.001, ηp2 = 0.268). Conversely, when users perceived a low level of threat clues, the difference in analytical processing between high and low perceived necessity was not statistically significant (Mhigh = 0.358, SD = 0.067 vs. Mlow = 0.401, SD = 0.076; F(1, 54) = 1.941, p = 0.169, ηp2 = 0.036), thereby providing support for H2b (see Table 6 and Fig. 9 for further details).

Discussion

Study 2 extends the findings of Study 1 by revealing the effects of perceived necessity and threat clues on users’ information processing modes. Specifically, participants exhibited a heightened inclination to process information analytically in the low-necessity and high-threat conditions, supporting H1b. Furthermore, the results illuminated the interaction effect: when users perceive a high level of threat clues, high and low levels of necessity lead to significant differences in the analytical processing mode, whereas when users perceive a low level of threat clues, they do not, supporting H2b.

General discussion

This research investigates how perceived necessity and the presence of threat clues influence users’ attention to privacy policies, as well as the cognitive mechanisms that drive these behaviors, based on the dual process theory. Based on both eye tracking and self-report methods, we show that a heightened perceived necessity leads users to spend less time reviewing privacy policies and to be less likely to engage in analytical processing. Conversely, when the perceived necessity is low, users show more engagement with the policies. The presence of threat clues signaling a high risk of privacy leak intensifies this effect, whereas threat clues signaling a lower risk have the opposite effect. The following sections will explore the theoretical and practical ramifications of these observations.

Theoretical implications

This research makes three significant contributions to the growing body of work investigating users’ behaviors toward privacy policies. Firstly, to the best of our knowledge, it represents a pioneering effort to explore the attention users give to privacy policies in the context of wearable devices, particularly with a focus on the unique factor of perceived necessity. Although much research has investigated users’ reading behaviors toward privacy policies, most of it is grounded in the context of mobile phones, whereas wearable devices, despite their widespread popularity and substantial market share, have received limited attention. Our research not only focuses on wearable devices but also identifies a unique and important factor (i.e., perceived necessity) that distinguishes wearable devices from other contexts, such as mobile phones and tablets, thereby extending the existing literature. On the other hand, we also extend prior studies that have emphasized the pivotal role of threat clues and their impact on user visual attention (Christin et al., 2013). We uncover a significant interaction between perceived necessity and threat clues, offering new insights into user behavior toward privacy policies. This finding adds a new dimension to our understanding of how users process privacy-related information.

Secondly, we extend previous literature that focuses on users’ behavioral outcomes toward privacy policies by using eye tracking technology to uncover the information searching process behind these outcomes. Prior research has primarily relied on users’ self-reported claims or their choices in online contexts (Steinfeld, 2016), but these classical methods provide limited direct evidence of how users actually engage with privacy policies, preventing us from understanding the information searching and processing underlying their behaviors or knowing exactly which elements of a privacy policy interest users. Unlike prior studies, we employ eye tracking technology, using fixation duration to represent visual attention and heat maps to provide a visual representation of the information retrieval process. This approach yields valuable insights into user behavior, effectively deepening our understanding beyond prior research.

Thirdly, we advance the current research by delving into the cognitive mechanisms that underlie users’ behaviors toward privacy policies, drawing upon the framework of dual process theory. Previous studies have predominantly concentrated on observing users’ external behaviors while paying less attention to the cognitive processing modes that shape users’ reading behavior (Ermakova et al., 2014; McDonald and Cranor, 2008; Schaub et al., 2014). However, these processing modes are crucial. For example, Kehr et al. (2015) discovered that individuals adopting an analytical processing approach tend to weigh risks and benefits more thoroughly, whereas those relying on automatic processing are more likely to draw upon past experiences in making disclosure decisions. Hence, this research unveils the cognitive mechanisms employed by users through the PDP paradigm, enhancing our understanding of users’ cognitive processing and supplementing existing research in this domain.

Practical implications

Our findings offer valuable insights with significant implications for both app providers and users. Firstly, they underscore that the imposition of high perceived necessity tends to deter users from dedicating adequate attention to privacy policies. Consequently, app providers seeking to guide users toward a more thorough examination of privacy policies should consider strategies that mitigate the factors contributing to high task necessity. One effective approach is to refrain from using coercive language, such as necessity terms like “need.” By adopting a more user-friendly and accommodating tone in their communications, app providers can create an environment where users feel more inclined to engage with privacy policies on their own terms, fostering a sense of autonomy and understanding.

Secondly, our research unveils a significant correlation between the presence of threat clues and user commitment to reviewing privacy policies. When users are exposed to a low level of threat clues, they tend to allocate significantly less time to privacy policies, irrespective of the level of perceived necessity. To address this issue and enhance user engagement, we recommend that app providers consider implementing strategies to introduce more prominent and attention-grabbing threat clues. This may entail offering explicit information about the potential risks associated with inadequate privacy protection, thus compelling users to take privacy policy evaluation more seriously.

However, it is important to exercise caution when users are presented with a high level of threat clues, particularly in conjunction with high perceived necessity. Our findings indicate that, under such circumstances, users exhibit reduced attention to privacy policies. In response, app providers should adopt a nuanced approach. We advise them to convey the significance of privacy without overwhelming users with excessive compulsion or information. This demands a delicate balancing act, ensuring users remain informed and engaged while avoiding feelings of pressure or information overload.

On the users’ side, we strongly advise them to maintain vigilance and heighten their awareness of privacy policies during the installation of apps considered highly essential. The perceived necessity of these app installations might unintentionally cause users to overlook the associated privacy policies. For instance, in cases where users are compelled to install productivity or sports applications, their focus on privacy policies may be overridden by the perceived necessity of the installation. Therefore, it becomes important for users to remind themselves to scrutinize the privacy policy. This proactive approach enables users to fortify the protection of their personal information, effectively mitigating potential privacy risks and ensuring a more secure digital experience.

Limitations and future research

Although our research has significant theoretical and practical implications, it also has several limitations that invite future research. Firstly, we do not account for the influence of app type. Different app categories can exert varying degrees of influence on users’ attention to privacy policies: users of financial apps tend to be more attentive to privacy policies, whereas users installing entertainment apps may pay less heed to privacy-related terms. Hence, we suggest that future research consider different types of apps to enrich our understanding of user behavior.

Secondly, our study does not delve into the role of app providers, which is likely to significantly impact users’ engagement with privacy policies. User trust can vary substantially with the app provider’s reputation and size (Wu et al., 2012). For instance, users are inclined to place greater trust in well-established entities such as Google, potentially resulting in diminished attention to privacy policies, whereas smaller and less-known companies may elicit greater scrutiny, encouraging a more thorough examination of privacy policies.

Thirdly, homogeneous participants help control potential confounding factors and enhance the internal validity of the study, but they also limit the generalizability of our findings. College students constitute a crucial user demographic for wearable devices; however, they are well educated yet have limited social experience and cannot represent the overall user population, which may significantly shape their privacy-related attitudes and subsequent behaviors. For example, college students have a greater propensity and ability to read, which may make them more likely to pay attention to privacy policies (Acquisti and Gross, 2006). Yet their relative lack of social experience might lead them to prioritize privacy concerns without fully considering the economic implications. Therefore, we believe it is essential to examine the privacy policy behavior of other populations, such as the working population, and we hope future research can address this gap.

Conclusion

Our research underscores the pressing issue of users overlooking privacy policies during the installation of apps on wearable devices. As dependency on wearables intensifies in our daily routines, a stark contrast remains between the concerns users have over data privacy and their propensity to bypass privacy policies. Drawing from eye tracking and behavioral evidence, this research sheds light on the impact of perceived necessity in the scrutiny of privacy policies on wearable devices and its interaction with threat clues.

The findings deepen our comprehension of users’ behaviors in the context of wearable devices and broaden the existing research on users’ engagement with privacy policies during app installation by leveraging eye tracking technology and delving into underlying cognitive processes. Moreover, this research provides practical recommendations for app providers on how to encourage users to pay more attention to privacy policies by manipulating the factors of users’ perceived necessity and threat clues. We also advocate for users to exercise increased caution and enhance their attentiveness to privacy policies, particularly when the app is deemed highly essential. We hope that this research will bolster digital security awareness and contribute to the cultivation of a safer digital environment for users.