Technology is vital for expansion of the service economy (Huang and Rust 2017). Service robots are expected to change the way services are provided and to alter how customers and firms interact (van Doorn et al. 2017). Service robots are defined as autonomous agents whose core purpose is to provide services to customers by performing physical and nonphysical tasks (Joerling et al. 2019). They can be physically embodied or virtual (for example, voice- or text-based chatbots). The market value for service robots is forecast to reach US$ 699.18 million by 2023 (Knowledge Sourcing Intelligence 2018). SoftBank has sold more than 10,000 of its humanoid service robot, Pepper, since launching it in 2014 (Mende et al. 2019). Pepper is employed by service providers in restaurants, airports, and cruise liners to greet guests and help them navigate the location. It is highly likely that robots will become more common and that customers will have to use them more in the future.

The present study enhances understanding of how customers interact with and experience inanimate objects such as service robots. Marketing has long studied various customer–object relations. For example, studies have taken a sensual perception or affective relational perspective. As customers’ initial responses to objects are often driven by the objects’ sensual appeal, sensory marketing has explored how customers perceive objects through different inputs of their five senses (Bosmans 2006; Peck and Childers 2006). Additionally, scholars have explored emotional customer–object relations. These affective relations mainly occur in contexts regarding consumption objects and possessions, where product attachment and material possession love impact consumption behavior (Kleine and Baker 2004; Lastovicka and Sirianni 2011). Another literature stream examines how customers anthropomorphize objects, such as service robots, and assign human characteristics to them (Epley et al. 2007).

To facilitate customer–robot interactions, marketing managers often favor humanlike service robots to increase customers’ perceptions of social presence (Niemelä et al. 2017). These robots have a human shape, show human characteristics, or imitate human behavior (Bartneck et al. 2009). In the virtual context, chatbots’ mimicry of human behavior can often convince customers that they have been interacting with a human (Wünderlich and Paluch 2017). Novak and Hoffman (2019) note a growing consensus in marketing and psychology that anthropomorphism is important for understanding how customers experience inanimate objects (MacInnis and Folkes 2017; Waytz et al. 2014). Anthropomorphism in this study refers to the extent to which customers perceive service robots as humanlike, rather than to the extent to which firms design robots as humanlike. According to Epley et al. (2007, p. 865), this perception results from “the attribution of human characteristics or traits to nonhuman agents.”

While marketing has found anthropomorphism to increase product and brand liking (Aggarwal and Gill 2012), whether anthropomorphism in service robots enhances customers’ experiences is unclear. Some scholars argue that perception of humanlike qualities in service robots facilitates engagement with customers, since it “incorporates the underlying principles and expectations people use in social settings in order to fine-tune the social robot’s interaction with humans” (Duffy 2003, p. 181). However, others are more skeptical; as perceived anthropomorphism increases, “consumers will experience discomfort – specifically, feelings of eeriness and a threat to their human identity” (Mende et al. 2019, p. 539). Although scholars have frequently examined the impact of anthropomorphism on customer intention to use a service robot, results are inconsistent, showing positive (Stroessner and Benitez 2019), neutral (Goudey and Bonnin 2016), and negative (Broadbent et al. 2011) effects. Thus, clear management guidelines are lacking, which is unfortunate given firms’ need to “carefully consider how to use AI [artificial intelligence] to engage customers in a more systematic and strategic way” (Huang and Rust 2020, p. 3).

In response to calls by Thomaz et al. (2020) and van Doorn et al. (2017) for more research on when and why customers anthropomorphize service robots and how anthropomorphism influences customer outcomes, the present study uses meta-analysis to enhance understanding of the role of anthropomorphism in influencing customer use intention of service robots. The meta-analysis develops and tests a comprehensive framework to clarify the effects of anthropomorphism on important customer outcomes, assess mediators, identify factors that affect customers’ propensity to anthropomorphize robots, and analyze how contextual factors affect anthropomorphism (Grewal et al. 2018). We thus make several contributions.

First, we synthesize previous research on the relationship between robot anthropomorphism and customer use intention. While one literature stream refers to anthropomorphism theory and suggests that anthropomorphism has positive effects on technology use (Duffy 2003), other literature streams refer either to uncanny valley theory or expectation confirmation theory and argue in favor of negative effects (Ho and MacDorman 2010). Our meta-analysis resolves these inconsistent findings, clarifying whether and under what circumstances customers appreciate anthropomorphism, and whether this relates positively or negatively to technology perception and use intention. The results will guide managers in whether to consider anthropomorphism as a factor influencing robot use.
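The pooling logic behind such a synthesis can be sketched briefly. The sketch below is illustrative only and assumes standard meta-analytic choices (Fisher's z-transformed correlations combined with DerSimonian–Laird random-effects weights); it is not a description of this study's exact estimation procedure, and the function names and study values are hypothetical.

```python
import math

def fisher_z(r):
    """Fisher's z-transformation stabilizes the variance of correlations."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inv_fisher_z(z):
    """Back-transform a pooled z-value to the correlation metric."""
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)

def pool_correlations(studies):
    """Random-effects (DerSimonian-Laird) pooling of correlations.

    `studies` is a list of (r, n) tuples: one observed correlation
    between anthropomorphism and use intention per study, with its
    sample size. Returns the pooled correlation.
    """
    zs = [fisher_z(r) for r, n in studies]
    vs = [1.0 / (n - 3) for r, n in studies]   # sampling variance of Fisher z
    ws = [1.0 / v for v in vs]                  # fixed-effect (inverse-variance) weights
    z_fixed = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    # Between-study heterogeneity (DerSimonian-Laird estimator)
    q = sum(w * (z - z_fixed) ** 2 for w, z in zip(ws, zs))
    c = sum(ws) - sum(w ** 2 for w in ws) / sum(ws)
    tau2 = max(0.0, (q - (len(zs) - 1)) / c)
    # Random-effects weights add tau2 to each study's variance
    ws_re = [1.0 / (v + tau2) for v in vs]
    z_re = sum(w * z for w, z in zip(ws_re, zs)) / sum(ws_re)
    return inv_fisher_z(z_re)

# Hypothetical mixed findings: positive, stronger positive, and negative effects
pooled = pool_correlations([(0.30, 100), (0.50, 200), (-0.10, 150)])
```

Because the random-effects estimate is a weighted average of the study-level effects, inconsistent positive and negative correlations are reconciled into a single pooled estimate whose weight structure also reflects between-study heterogeneity.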

Second, we examine the mediating mechanisms between service robot anthropomorphism and customer use intention. Considering mediators is vital because it helps scholars avoid overestimating or underestimating the importance of anthropomorphism (Iyer et al. 2011; Qiu and Benbasat 2009). Many studies report that humanizing a robot improves customers’ evaluations, supporting a positive effect of anthropomorphism on intention to use. However, this is not always the case. Goudey and Bonnin (2016) found that the anthropomorphism of a companion robot did not increase intention to use it, and some studies have found that people prefer a less humanlike robot (Broadbent et al. 2011) or an explicitly machinelike robot (Vlachos et al. 2016), suggesting a negative effect of anthropomorphism. This seems to support Mori’s (1970) uncanny valley hypothesis that the use intention of a robot does not always increase with its humanlikeness; people may find a highly humanlike robot creepy and uncanny, and feelings of eeriness or discomfort may lead to rejection. In addition, Goetz et al. (2003) found that although people prefer humanlike robots for social roles, they prefer machinelike robots for more investigative roles, such as lab assistant. These mixed findings indicate the complexity of the relationship between anthropomorphism and use intention and suggest that the effects of robot anthropomorphism on customer use intention are multi-faceted and contingent. To address this complexity and offer a fuller understanding of this important relationship, we included relevant mediators and moderators in our meta-analysis.

Antecedents of anthropomorphism

We considered two sets of antecedents: customer characteristics and robot design features, since anthropomorphism is not merely the result of a process triggered by an agent’s humanlike features but also reflects customer differences in anthropomorphizing tendencies (Waytz et al. 2014). To select relevant customer characteristics as antecedent variables, we focused on five customer traits and predispositions that have been shown to impact customer use of new technologies: competence, prior experience, computer anxiety, need for interaction, and negative attitudes toward robots (NARS), all of which are technology-related psychological factors. The first four variables come from Epley et al.’s (2007) theory of anthropomorphism, and the last variable is a robot-related general attitude frequently used in HRI research. We also included sociodemographic variables as antecedents. Finally, we included major physical and nonphysical robot design features as antecedents of anthropomorphism.

Traits and predispositions

Competence

Competence can be defined as the customer’s potential to use a service robot to complete a task or performance successfully. It is a multi-faceted construct composed of an individual’s knowledge of and ability to use a robot. It relates to individual factors such as knowledge, expertise, and self-efficacy (Munro et al. 1997). According to Epley et al. (2007), the first of the three psychological determinants of anthropomorphism is elicited agent knowledge; for customers who are knowledgeable about robots, anthropomorphic knowledge and representation are readily accessible and applicable, and therefore they are more likely to humanize the robot. The literature provides limited empirical evidence for a positive effect of competence on anthropomorphism, suggesting that after interacting with or using a robot, people tend to anthropomorphize it more (Fussell et al. 2008). Other studies, however, have found no influence (Ruijten and Cuijpers 2017) or even a negative relationship (Haring et al. 2015). It seems that the more people are capable of using a robot, the lower their anthropomorphic tendency, because there is no need to facilitate the interaction by humanizing the robot.

Prior experience

Prior experience comprises the individual’s opportunity to use a specific technology (Venkatesh et al. 2012). In contrast to competence, robot-related experience implies previous initial contact or interaction with a service robot that does not necessarily include fulfilling a task (MacDorman et al. 2009). The influence of robot-related experience on anthropomorphism is unclear, with contradictory findings. Some studies provide evidence of a positive effect on anthropomorphism (Aroyo et al. 2017), in line with Epley et al. (2007). The elicited agent knowledge in the form of robot-related experience could result in the projection of human attributes to the service robot (Epley et al. 2007). However, several studies indicate a negative effect of experience on anthropomorphism (Haring et al. 2016), or a nonsignificant effect (Stafford 2014).

Computer anxiety

Computer anxiety is the degree of an individual’s apprehension, or even fear, regarding using computers (Venkatesh 2000). Robots are essentially a computer-based technology, and people with different anxiety levels may react differently to robots. According to Epley et al. (2007), the second determinant of anthropomorphism is effectance, the motivation to explain and understand nonhuman agents’ behavior. People high in computer anxiety are more likely to feel a lack of control and uncertain about interacting with a robot, and so their effectance motivation is typically stronger; that is, they have a higher desire to reduce uncertainty by controlling the robot. Anthropomorphism can satisfy this need by increasing someone’s ability to make sense of a robot’s behavior and their confidence in controlling the robot during the interaction (Epley et al. 2008). Thus, anxiety associated with uncertainty should increase the tendency to humanize a robot.

Need for interaction

Like the need to belong and the need for affiliation, the need for interaction is a desire to retain personal contact with others (particularly frontline service employees) during a service encounter (Dabholkar 1996). This relates to the third psychological determinant of anthropomorphism, sociality, which is the need and desire to establish social connections with other humans (Epley et al. 2007). Research indicates that lonely people have a stronger tendency to humanize robots, perhaps because of social isolation, exclusion, or disconnection (Kim et al. 2013). Anthropomorphism can satisfy their need to belong and desire for affiliation by enabling a perceived humanlike connection with robots. Similarly, in a robot service context where social connection with frontline service employees is lacking, customers with a greater need for interaction may compensate and attempt to alleviate this social pain by perceiving a service robot as more humanlike, thus creating a humanlike social interaction (Epley et al. 2008). Therefore, need for interaction should increase customers’ tendency to humanize a service robot.

Negative attitudes toward robots in daily life (NARS)

The concept of NARS (Nomura et al. 2006) captures a general attitude and predisposition toward robots, and is a key psychological factor preventing humans from interacting with robots. While both anthropomorphism and NARS are important constructs in HRI research, their relationship remains understudied and unclear (Destephe et al. 2015). We suggest that NARS may influence anthropomorphism in a similar way to computer anxiety, because both are negative predispositions toward technology (Broadbent et al. 2009). A distinction is important, as computer anxiety is broader (referring to computer technology in general) and emotional (involving fear), whereas NARS is more specific (robot-focused) and attitudinal (involving dislike); nevertheless, the former may lead to the latter (Nomura et al. 2006). Customers with high NARS will feel uncomfortable when interacting with a robot in a service encounter because in general they do not like robots. Hence, in order to facilitate the interaction and improve the service experience, they will tend to anthropomorphize the robot and treat it like a human service employee. We predict a positive influence of NARS on anthropomorphism.

Sociodemographics

Age

In general, age is found to negatively impact people’s willingness to use robots (Broadbent et al. 2009); older people are more skeptical about technology, have more negative attitudes toward robots, and therefore have lower intention to use them. However, a study on healthcare robots found no age effects, suggesting that age need not be a barrier (Kuo et al. 2009). Regarding age influences on anthropomorphism, the literature has focused on children and elderly people, and findings suggest that these segments have a strong tendency to humanize robots (Sharkey and Sharkey 2011). For example, there is evidence that children anthropomorphize nonhuman agents more than adults do (Epley et al. 2007); they tend to ascribe human attributes such as free will, preferences, and emotions even to simple robots, although this tendency decreases with age. There are also indications that people are more likely to anthropomorphize robots as their age increases (Kamide et al. 2013).

Customer gender

Research shows that in general men hold more favorable attitudes toward robotic technologies, tend to perceive robots as more useful, and are more willing to use robots in their daily lives; women are more skeptical about interacting with robots, tend to evaluate them more negatively, and are less likely to use them (de Graaf and Allouch 2013). Consistent with this pattern, most studies have found that women anthropomorphize robots more strongly than men do (Kamide et al. 2013), perhaps because of high effectance and sociality motivations resulting from technology anxiety or a need for social connection (Epley et al. 2007). Nevertheless, some studies have argued that men tend to perceive a robot as an autonomous person and therefore anthropomorphize robots more than women do (de Graaf and Allouch 2013). Others have found no gender differences (Athanasiou et al. 2017).

Education

There is a lack of clarity about the effects of an individual’s educational level on their perceptions and evaluations of robots (Broadbent et al. 2009). Evidence that higher education is associated with more positive attitudes toward robots is limited (Gnambs and Appel 2019). Research has yet to examine explicitly whether and how anthropomorphic tendencies vary with educational level. However, anthropomorphism theory suggests that people of modern cultures are more familiar with and knowledgeable about technological devices than those of nonindustrialized cultures (Epley et al. 2007). Since they have greater understanding of how these technological devices work and how to use them, they are less likely to anthropomorphize them. This argument suggests a negative effect of education on anthropomorphism, because people of modern cultures are generally better educated than those of nonindustrialized cultures.

Income

Income is the least examined sociodemographic factor in HRI research. Gnambs and Appel (2019) found that white-collar workers held slightly more favorable attitudes toward robots than blue-collar workers. While there is no direct empirical evidence for the effect of income on anthropomorphism, we suggest that it may be similar to the effect of education, because education and income are highly related and are both indicators of social class. People with higher incomes have more opportunities to interact with innovative technologies such as service robots at work and in their daily lives. They are more capable of using robots, and therefore more likely to acquire nonanthropomorphic representations of robots’ inner workings and less likely to humanize them (Epley et al. 2007).

Robot design

Physical features

It is relatively intuitive that a robot’s physical appearance or embodiment can affect the extent to which it is anthropomorphized. Research has consistently shown that the presence of human features such as head, face, and body increases the perceived humanlikeness of a robot (Erebak and Turgut 2019; Zhang et al. 2010). These physical features serve as observable cues of humanlikeness; hence, the more human features a robot possesses, the more strongly it is anthropomorphized.

Nonphysical features

Nonphysical features mainly refer to robots’ behavioral characteristics, such as gaze, gesture, voice, and mimicry. Research shows that robots that can make eye contact, use gestures, move, and talk when interacting with people are perceived as more humanlike than those without such abilities, and that the more a robot gazes, gestures, moves, and talks like a human, the more anthropomorphic it is perceived to be (Kompatsiari et al. 2019; Salem et al. 2013; Zhang et al. 2010). However, this positive effect of behavioral features on anthropomorphism is sometimes found to be nonsignificant (Ham et al. 2015; Kim et al. 2019). Nonphysical features also include a robot’s emotionality and personality, which likewise influence people’s anthropomorphic perceptions. For example, Novikova (2016) reported that an emotionally expressive robot was rated significantly higher on anthropomorphism than a nonemotional robot, and Moshkina (2011) found that an extraverted robot was rated as more humanlike than an introverted one.

Mediators of anthropomorphism

To provide a full account of the multi-faceted effects of robot anthropomorphism on customer use intention, we examined three sets of mediators from the literature. First, from HRI research we drew four major robot characteristics as robot-related mediators (Bartneck et al. 2009); to capture the social aspect of a service robot, we also included social presence as a fifth robot characteristic (van Doorn et al. 2017). Second, from technology acceptance research we included usefulness and ease of use as functional mediators (Davis et al. 1989). Robots are essentially a form of technology, and these two variables appear to play key mediating roles in technology acceptance (Blut et al. 2016; Blut and Wang 2020). Third, drawing on the relationship marketing literature, we incorporated five common relational mediators; unlike other forms of technology, relationship-building with robots, especially service robots, is possible and even desired by customers. Thus, we extended Wirtz et al.’s (2018) robot acceptance model by systematically examining robot-related, functional, and relational factors as mediators in the anthropomorphism–use intention relationship. We now discuss the effect of anthropomorphism on each mediator. We will not discuss the effects of mediators on use intention, because they are well-established in the relevant literature on HRI, technology acceptance, and marketing.

Robot-related mediators

Animacy

Animacy is the extent to which a robot is perceived as being alive (Bartneck et al. 2009). Robots high in animacy are lifelike creatures that seem capable of connecting emotionally with customers and triggering emotions. Research often reports a highly positive correlation between anthropomorphism and animacy, suggesting conceptual overlap (Ho and MacDorman 2010), as being alive is an essential part of being humanlike (Bartneck et al. 2009). For example, Castro-González et al. (2018) found that the more humanlike people perceive a robot’s mouth to be, the more alive they rate the robot. Thus, anthropomorphism should positively impact animacy; the more a robot is humanized, the more lifelike the perception. In service contexts, this means that when customers perceive a service robot as more humanlike, they are more likely to feel as if they are interacting with a human service employee rather than a machine.

Intelligence

Intelligence is the extent to which a robot appears to be able to learn, reason, and solve problems (Bartneck et al. 2009). There is evidence that anthropomorphism increases customers’ perceptions of the intelligence of various smart technologies, including robots. Canning et al. (2014) showed that customers perceived humanlike robots as more intelligent than machinelike ones. When people anthropomorphize a robot, they typically treat it as a human being and expect it to exhibit aspects of human intelligence (Huang and Rust 2018). The more humanlike the robot is perceived, the more human intelligence people tend to ascribe to it. In service contexts, this suggests that when customers humanize a service robot, they tend to have higher expectations of its ability to deliver a service.

Likability

Likability is the extent to which a robot gives positive first impressions (Bartneck et al. 2009). Attractiveness is a similar concept, and anthropomorphism can help to make a robot aesthetically appealing and socially attractive. Numerous studies have confirmed a positive effect of anthropomorphism on likability (Castro-González et al. 2018; Stroessner and Benitez 2019). When people humanize a robot, it becomes more similar to them, which leads to a good first impression (van Doorn et al. 2017). Therefore, the greater the tendency to anthropomorphize a robot, the more people like the robot. In a service context, the positive effect of anthropomorphism on likability means that the humanlikeness of a service robot will enhance first impressions of the robot as a service provider. However, in line with uncanny valley theory (Mori 1970), some studies have found that a robot’s likability does not always increase with anthropomorphism; if it feels uncannily human, people find it unlikable (Mende et al. 2019).

Safety

Safety is the customer’s perception of the level of danger involved in interacting with a robot (Bartneck et al. 2009). It relates to feelings of risk and invasion of privacy. Bartneck et al. (2009) suggested that for someone to use a robot as a partner and coworker, it is necessary to achieve a positive perception of safety. This is especially true for service robots, because customer–robot interaction and co-production are inevitable. According to Epley et al. (2007), anthropomorphism can facilitate perceptions of safety by increasing the sense of the predictability and controllability of the nonhuman agent during interactions, thereby reducing feelings of risk and danger. For example, Benlian et al. (2019) showed that feelings of privacy invasion when using smart home assistants are lower when the technology is anthropomorphized by users. Thus, the literature supports a positive effect of anthropomorphism on perceived safety. In a service context, this suggests that the more a customer perceives a service robot as humanlike, the safer the service experience appears.

Social presence

Social presence is the extent to which a human believes that someone else is really present (Heerink et al. 2008). In HRI, social presence is “the extent to which machines (e.g., robots) make consumers feel that they are in the company of another social entity” (van Doorn et al. 2017, p. 44). This robot characteristic can satisfy sociality needs (Epley et al. 2007) and is therefore important for those with a greater need for interaction. The relationship between anthropomorphism and social presence is intuitive and straightforward: when people perceive robots as humanlike, they feel that they are interacting with and connecting to another person. Therefore, anthropomorphism evokes a sense of social presence, and the literature widely supports this positive effect (Kim et al. 2013). Thus, in a service context, robots that are perceived as more humanlike can provide customers with a stronger social presence, thereby enriching social interaction.

Functional mediators

Ease of use

As a key determinant in the technology acceptance model (TAM), ease of use is the degree to which a customer finds using a technology to be effortless (Davis et al. 1989). With few exceptions (Wirtz et al. 2018), ease of use has not been examined in robot studies. However, research suggests that anthropomorphism makes a robot more humanlike and thus more familiar. Familiarity can help people learn how to use a robot and interact with it more easily, and humanlikeness makes this interaction more natural (Erebak and Turgut 2019); this will increase the perceived ease of use. Hence, a positive effect of anthropomorphism on ease of use is expected. In a service context, this means that customers tend to see a humanlike service robot as easier to work with than a machinelike one. However, empirical analysis is lacking, barring one study that did not support this effect (Goudey and Bonnin 2016).

Usefulness

Defined as the subjective probability that using a technology will improve the way a customer completes a given task (Davis et al. 1989), usefulness is another key determinant in TAM. Epley et al. (2007) suggested that anthropomorphism increases the perceived usefulness of robots in two ways. First, facilitating anthropomorphism can encourage a sense of efficacy that improves interaction with a robot. Second, anthropomorphism can increase the sense of being socially connected to the robot and thus its perceived usefulness. The literature generally supports a positive effect of anthropomorphism on usefulness. Canning et al. (2014) found that people rated humanlike robots higher than mechanical ones on utility, and Stroessner and Benitez (2019) found that humanlike robots were perceived as more competent than machinelike ones. However, Goudey and Bonnin (2016) found this effect to be nonsignificant. In a service context, the positive effect of anthropomorphism on usefulness suggests that customers will have more confidence in the ability of more humanlike robots to provide better services.

Relational mediators

Negative affect

Negative affect comprises intense negative feelings directed at someone or something (Fishbach and Labroo 2007); the negative feelings a robot may elicit include discomfort such as eeriness, strain, and threat. According to Mori’s (1970) uncanny valley hypothesis, highly humanlike robots generate feelings of eeriness, and people find such robots creepy because uncanny humanlikeness threatens people’s human identity. Therefore, when interacting with highly humanlike robots, people may experience heightened arousal and negative emotions (Broadbent et al. 2011). Research shows that people view humanlike robots with greater unease than machinelike robots, and that children may fear highly humanlike robots (Kätsyri et al. 2015). In a service context, Mende et al. (2019) found that customers experienced feelings of eeriness and threat to their human identity when interacting with a humanoid service robot and responded more negatively to a robot that was perceived as more humanlike. Therefore, anthropomorphism may not always be desirable, and to avoid causing negative emotions a robot should not be perceived as too humanlike.

Positive affect

Positive affect comprises intense positive feelings directed at someone or something (Fishbach and Labroo 2007); the positive feelings a robot may elicit include enjoyment, pleasure, and warmth. Marketing research indicates that anthropomorphized products and brands evoke positive emotional responses. Customers view such products and brands as more sociable and are more likely to connect to them emotionally and experience feelings of warmth (van Doorn et al. 2017). Regarding robots, van Pinxteren et al. (2019) found that a robot’s humanlikeness positively influenced customers’ perceived enjoyment. Kim et al. (2019) reported a positive effect of anthropomorphism on pleasure and warmth, suggesting that anthropomorphism enables a humanlike emotional connection with a nonhuman agent. It seems that anthropomorphism can elicit both positive and negative emotions toward a robot, with opposite effects on customer use intention, making this relationship complex.

Rapport

Rapport in this context is the personal connection between a customer and a robot (Wirtz et al. 2018). Building rapport with machines and technologies is often impossible or unnecessary; with service robots, however, it is both possible and desirable (Bolton et al. 2018). This is especially true in services, where rapport (with an employee or robot) is an important dimension in customer experience. Through anthropomorphism, people tend to perceive a robot to be more lifelike and sociable and feel a stronger sense of social connectedness, making emotional attachment to and bonding with the robot more likely. Thus, anthropomorphism facilitates human–robot rapport, making it easier, more desirable, and more meaningful. In a service context, Qiu et al. (2020) found that when customers humanize a service robot, they are more likely to build rapport with it.

Satisfaction

Satisfaction is defined as an affective state resulting from a customer evaluation of a service provided by a company (Westbrook 1987). With few exceptions, satisfaction has not been examined in HRI research. However, given the central role of satisfaction in marketing and its established influence on customers’ behavioral intentions, we include it as a mediator between anthropomorphism and customer intention to use a robot. Our discussion shows that anthropomorphism can improve people’s perceptions (e.g., perceived intelligence), evaluations (e.g., usefulness), and relationships (e.g., rapport) with a robot. Hence, we predict a positive effect of anthropomorphism on satisfaction. However, research also suggests that when a robot is perceived as more humanlike, people tend to treat it as a real person and expect it to show human intelligence. Their expectations regarding the robot’s human capabilities are increased, and they are likely to experience disappointment when the robot fails to meet those expectations (Duffy 2003).

Trust

In a service context, trust is a psychological expectation that others will keep their promises and will not behave opportunistically in expectation of a promised service (Ooi and Tan 2016). Anthropomorphism of service robots may help establish and increase trust. When people attribute human capabilities to a nonhuman agent, they tend to believe that the agent is able to perform the intended functions competently. In our context, this means that customers put more trust in the ability of a more humanlike robot to deliver a service. This positive effect of anthropomorphism on trust receives general support in the literature. Waytz et al. (2014) found that people trusted an autonomous vehicle more when it was anthropomorphized, de Visser et al. (2016) showed that a robot’s humanlikeness is associated with greater trust resilience, and van Pinxteren et al. (2019) confirmed that anthropomorphism of a humanoid service robot drove perceived trust in the robot. However, Erebak and Turgut (2019) found no effect of anthropomorphism on trust. Hancock et al.’s (2011) meta-analysis examined the impact of anthropomorphism on trust together with other robot attributes; the reported effect size is rather weak and nonsignificant.

Moderators of the anthropomorphism–use intention relationship

Several studies have examined moderators of the relationship between anthropomorphism and use intention. These studies have considered customer characteristics (the individual’s cultural background and feelings of social power), robot appearance and task, and situational factors (Fan et al. 2016; Kim and McGill 2011; Li et al. 2010). However, most have focused on one study context and only a few types of service robots. The present meta-analysis systematically analyzes two sets of variables that may exert a moderating influence: robot types and service contexts.

First, we focus on robot types that have a large impact on the robot’s overall appearance and behavior, as shown in Table 1. Anthropomorphism represents an important driver of customer decision-making and use intention. However, the importance of anthropomorphism may vary across robot types, since robots may display characteristics and behaviors that amplify or buffer the effect of anthropomorphism. Research in HRI has shown that robot behavior is strongly shaped by design features, in particular by physical embodiment and morphology (Pfeifer et al. 2007). In addition, the design decision to assign features with a more or less obvious gender orientation to a robot can bias perceptions of the robot because of gender stereotyping (Carpenter et al. 2009). Another powerful design strategy pertains to the level of cuteness or the choice of a zoomorphic body form for the robot. Both characteristics can endear the robot to the customer and produce a strong affective bond.

Table 1 Influence of moderators on the anthropomorphism–intention to use relationship

Second, drawing on task–technology fit (TTF) theory we propose that service contexts moderate the relationship between anthropomorphism and intention to use the robot (Table 1; Goodhue and Thompson 1995). TTF suggests that technology has to meet the customer’s requirements when engaging in specific tasks, such as receiving services from a robot. If the technology meets the customer’s needs during service provision (e.g., service robot anthropomorphism), the experience will be more satisfying and the customer more likely to use the technology again (Goetz et al. 2003). We examine five moderators characterizing the service context. We also control for the influence of various method moderators, as shown in Table 1.