Abstract
This research examines how artificial intelligence and robotic technology, emerging non-human agents in service industries, influence consumers’ likelihood of engaging in unethical behavior. Previous research has shown that consumers perceive non-human (vs. human) agents differently along many dimensions (e.g., as lacking emotional capability), which changes how consumers behave when interacting with them. We hypothesize and show across four studies that interacting with non-human (vs. human) agents, such as AI and robots, increases the tendency to engage in unethical consumer behavior because it reduces anticipatory feelings of guilt. We also demonstrate the moderating role of anthropomorphism: endowing non-human agents with humanlike features reduces unethical behavior. Finally, we rule out alternative explanations for the effect, including differential perceptions of the agents (e.g., “warmth,” “competence,” or “detection capacity”) and other measures associated with company capabilities.
Change history
16 May 2022
Springer Nature's version of this paper was updated to present the correct Supplementary Information.
Notes
The use of the term “AI” in this research encompasses various non-human technologies that fit the provided definition. Representative examples of AI include algorithms (e.g., IBM’s Watson), virtual agents (e.g., Alexa or Google Home), robots (e.g., the LoweBot assisting customers at Lowe’s), and chatbots.
We found a similar effect, in that participants were more likely to engage in unethical behavior in the AI condition (β = 1.00, Wald = 4.81, p < .03), when we excluded two participants who correctly guessed the purpose of this study, ruling out a demand effect.
We excluded four participants who did not recall the agent type they interacted with, leaving 133 participants for the subsequent analysis.
Option 1: I changed my mind (you should pay $6.99 for shipping cost); Option 2: I don’t like it (you should pay $6.99 for shipping cost); Option 3: Size doesn’t fit (free return); Option 4: Item is not the same as it was shown in the picture (free return).
We found an identical pattern for the effect of agent type on consumers’ unethical behavior (β = .85, Wald = 5.33, p < .03) and for the mediating role of anticipatory guilt (CI95% = [.0052, 1.1696]) when seven participants who were potentially affected by a demand effect were excluded from the analysis.
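The statistical machinery behind these notes, a logistic regression of the unethical-behavior choice on agent type with a Wald test, and a percentile-bootstrap confidence interval for the indirect effect through anticipatory guilt (in the spirit of Hayes, 2013, PROCESS Model 4), can be sketched in a few lines. The sketch below runs on fabricated data (all variable names, sample sizes, and effect sizes are invented for illustration) and implements the fits directly with NumPy rather than the authors’ actual tools:

```python
import numpy as np

# All data below are SIMULATED for illustration; the effect sizes are invented
# and do not reproduce the paper's datasets or reported statistics.
rng = np.random.default_rng(0)
n = 150
agent = rng.integers(0, 2, n).astype(float)        # 0 = human, 1 = AI condition
guilt = 5.0 - 1.2 * agent + rng.normal(0, 1.5, n)  # anticipatory guilt (assumed lower for AI)
p_cheat = 1 / (1 + np.exp(-(1.5 - 0.5 * guilt)))   # higher guilt -> less unethical behavior
cheat = rng.binomial(1, p_cheat).astype(float)

def logit_fit(X, y, iters=50):
    """Logistic regression via Newton-Raphson; returns coefficients and SEs."""
    X = np.column_stack([np.ones(len(y)), X])
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = np.clip(X @ b, -30, 30)
        p = 1 / (1 + np.exp(-eta))
        H = (X.T * (p * (1 - p))) @ X              # Fisher information matrix
        b = b + np.linalg.solve(H, X.T @ (y - p))
    se = np.sqrt(np.diag(np.linalg.inv(H)))
    return b, se

# Total effect of agent type on unethical behavior, with a Wald chi-square test
# (the notes report a beta and Wald statistic from this kind of model)
b, se = logit_fit(agent[:, None], cheat)
beta, wald = b[1], (b[1] / se[1]) ** 2

# Percentile-bootstrap CI for the indirect effect a*b through anticipatory guilt
def indirect(idx):
    a = np.linalg.lstsq(np.column_stack([np.ones(len(idx)), agent[idx]]),
                        guilt[idx], rcond=None)[0][1]    # agent -> guilt (OLS)
    bm, _ = logit_fit(np.column_stack([agent[idx], guilt[idx]]), cheat[idx])
    return a * bm[2]                                     # guilt -> behavior (logit)

boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(2000)])
ci = np.percentile(boot, [2.5, 97.5])
```

A bootstrap interval that excludes zero, as with the reported CI95% = [.0052, 1.1696], is the conventional criterion for a significant indirect effect in this framework.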
References
ABI (2020). Detected insurance fraud – new data shows that a fraudulent claim is discovered every five minutes. Retrieved June 10, 2021 from https://www.abi.org.uk/news/news-articles/2020/09/detected-insurance-fraud/.
Aggarwal, P., & McGill, A. L. (2012). When brands seem human, do humans act like brands? Automatic behavioral priming effects of brand anthropomorphism. Journal of Consumer Research, 39(2), 307–323.
Amir, A., Kogut, T., & Bereby-Meyer, Y. (2016). Careful cheating: People cheat groups rather than individuals. Frontiers in Psychology, 7, 371.
Baumeister, R., Stillwell, A., & Heatherton, T. (1994). Guilt: An interpersonal approach. Psychological Bulletin, 115(2), 243–267.
Baumeister, R. F., Bratslavsky, E., Muraven, M., & Tice, D. M. (1998). Ego depletion: Is the active self a limited resource? Journal of Personality and Social Psychology, 74(5), 1252–1265.
Baumeister, R., Vohs, K., Dewall, N., & Zhang, L. (2007). How emotion shapes behavior: Feedback, anticipation, and reflection, rather than direct causation. Personality and Social Psychology Review, 11(2), 167–203.
Bhardwaj, P. (2019). Chatbots and abuse: A growing concern. Retrieved October 28, 2021 from https://medium.com/ruuh-ai/chatbots-and-abuse-a-growing-concern-77f3775f93e6.
Borden, B. (2021). From the ‘digital front’ door to tracking PPE, healthcare finding new AI uses. Retrieved June 5, 2021 from https://info.kpmg.us/news-perspectives/technology-innovation/thriving-in-an-ai-world/ai-adoption-healthcare.html.
Brunell, A. B., Staats, S., Barden, J., & Hupp, J. M. (2011). Narcissism and academic dishonesty: The exhibitionism dimension and the lack of guilt. Personality and Individual Differences, 50(3), 323–328.
Castano, E., & Giner-Sorolla, R. (2006). Not quite human: Infrahumanization in response to collective responsibility for intergroup killing. Journal of Personality and Social Psychology, 90(5), 804–818.
Christakis, N. A. (2019). How AI will rewire us. Retrieved June 10, 2021 from https://www.theatlantic.com/magazine/archive/2019/04/robots-human-relationships/583204/.
Cox, A. D., Cox, D., Anderson, R. D., & Moschis, G. P. (1993). Research note: Social influences on adolescent shoplifting—Theory, evidence, and implications for the retail industry. Journal of Retailing, 69(2), 234–246.
DePaulo, B. M., & Kashy, D. A. (1998). Everyday lies in close and casual relationships. Journal of Personality and Social Psychology, 74(1), 63–79.
Ding, M. (2007). An incentive-aligned mechanism for conjoint analysis. Journal of Marketing Research, 44(2), 214–223.
Dittenhofer, M. A. (1995). The behavioural aspects of fraud and embezzlement. Public Money & Management, 15(1), 9–14.
Duhachek, A., Agrawal, N., & Han, D. (2012). Guilt versus shame: Coping, fluency, and framing in the effectiveness of responsible drinking messages. Journal of Marketing Research, 49(6), 928–941.
E&T editorial staff. (2021). AI that mimics mindset of a doctor could transform medical practice. Retrieved June 1, 2021 from https://eandt.theiet.org/content/articles/2021/03/ai-that-mimics-mindset-of-a-doctor-could-transform-medical-practice/.
Fischbacher, U., & Föllmi-Heusi, F. (2013). Lies in disguise—An experimental study on cheating. Journal of the European Economic Association, 11(3), 525–547.
Fisk, R., Grove, S., Harris, L. C., Keeffe, D. A., Daunt, K. L., Russell-Bennett, R., & Wirtz, J. (2010). Customers behaving badly: A state of the art review, research agenda and implications for practitioners. Journal of Services Marketing, 24(6), 417–429.
Fullerton, R. A., & Punj, G. (1993). Choosing to misbehave: A structural model of aberrant consumer behavior. ACR North American Advances.
Garvey, A. M., Kim, T., & Duhachek, A. (2022). EXPRESS: Bad News? Send an AI. Good News? Send a Human. Journal of Marketing. https://doi.org/10.1177/00222429211066972
Gaudiello, I., Zibetti, E., Lefort, S., Chetouani, M., & Ivaldi, S. (2016). Trust as indicator of robot functional and social acceptance. An experimental study on user conformation to iCub answers. Computers in Human Behavior, 61, 633–655.
Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619.
Gray, K., Young, L., & Waytz, A. (2012). Mind perception is the essence of morality. Psychological Inquiry, 23(2), 101–124.
Han, D., Duhachek, A., & Agrawal, N. (2014). Emotions shape decisions through construal level: The case of guilt and shame. Journal of Consumer Research, 41(4), 1047–1064.
Harris, L. C., & Reynolds, K. L. (2003). The consequences of dysfunctional customer behavior. Journal of Service Research, 6(2), 144–161.
Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. Guilford Press.
Hayes, A. F., & Preacher, K. J. (2014). Statistical mediation analysis with a multicategorical independent variable. British Journal of Mathematical and Statistical Psychology, 67(3), 451–470.
Huang, M. H., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155–172.
Jones, T. M. (1991). Ethical decision making by individuals in organizations: An issue-contingent model. Academy of Management Review, 16(2), 366–395.
Jones, G. E., & Kavanagh, M. J. (1996). An experimental examination of the effects of individual and situational factors on unethical behavioral intentions in the workplace. Journal of Business Ethics, 15(5), 511–523.
Kim, T. W., & Duhachek, A. (2020). Artificial intelligence and persuasion: A construal-level account. Psychological Science, 31(4), 363–380.
Kim, S., & McGill, A. L. (2011). Gaming with Mr. Slot or gaming the slot machine? Power, anthropomorphism, and risk perception. Journal of Consumer Research, 38(1), 94–107.
Kim, T. W., Jiang, L., Duhachek, A., Lee, H. J., & Garvey, A. (2021). Do you mind if I ask you a personal question? How artificial intelligence alters consumer self-disclosure. Working paper.
Köbis, N. C., Verschuere, B., Bereby-Meyer, Y., Rand, D., & Shalvi, S. (2019). Intuitive honesty versus dishonesty: Meta-analytic evidence. Perspectives on Psychological Science, 14(5), 778–796.
LaMothe, E., & Bobek, D. (2020). Are individuals more willing to lie to a computer or a human? Evidence from a tax compliance setting. Journal of Business Ethics, 167, 157–180.
Lavater, J. C. (1804). Essays on physiognomy: For the promotion of the knowledge and the love of mankind (Vol. 1). C. Whittingham.
Lohr, S. (2017). AI is doing legal work. But it won’t replace lawyers, yet. Retrieved January 28, 2021 from https://www.nytimes.com/2017/03/19/technology/lawyers-artificial-intelligence.html.
Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629–650.
Luo, X., Tong, S., Fang, Z., & Qu, Z. (2019). Frontiers: Machines vs. humans: The impact of artificial intelligence chatbot disclosure on customer purchases. Marketing Science, 38(6), 937–947.
Mangan, D. (2017). Lawyers could be the next profession to be replaced by computers. Retrieved January 28, 2021 from https://www.cnbc.com/2017/02/17/lawyers-could-be-replaced-by-artificial-intelligence.html.
Martineau, P. (2017). Someone covered this robot security guard in barbecue sauce and bullied it into submission. Retrieved June 10, 2021 from https://nymag.com/intelligencer/2017/12/robot-security-guard-bullied-and-covered-in-barbecue-sauce.html.
Mazar, N., Amir, O., & Ariely, D. (2008). The dishonesty of honest people: A theory of self-concept maintenance. Journal of Marketing Research, 45(6), 633–644.
Mende, M., Scott, M. L., van Doorn, J., Grewal, D., & Shanks, I. (2019). Service robots rising: How humanoid robots influence service experiences and elicit compensatory consumer responses. Journal of Marketing Research, 56(4), 535–556.
Mitchell, V. W., Balabanis, G., Schlegelmilch, B. B., & Cornwell, T. B. (2009). Measuring unethical consumer behavior across four countries. Journal of Business Ethics, 88(2), 395–412.
Mookerjee, S. S., Cornil, Y., & Hoegg, J. (2021). From waste to taste: How “ugly” labels can increase purchase of unattractive produce. Journal of Marketing, 85(3), 62–77.
Moon, Y. (2000). Intimate exchanges: Using computers to elicit self-disclosure from consumers. Journal of Consumer Research, 26(4), 323–339.
Moschis, G. P., & Cox, D. (1989). Deviant consumer behavior. ACR North American Advances.
Moshakis, A. (2018). Nation of shoplifters: The rise of supermarket self-checkout scams. Retrieved November 29, 2021 from https://www.theguardian.com/global/2018/may/20/nation-of-shoplifters-supermarket-self-checkout.
Mubin, O., Cappuccio, M., Alnajjar, F., Ahmad, M. I., & Shahid, S. (2020). Can a robot invigilator prevent cheating? AI & Society, 1–9.
Newsday. (2021). Robots are replacing humans in a tight labor market, and those jobs won't return. Retrieved October 28, 2021 from https://www.newsday.com/business/coronavirus/robot-automation-replacing-human-worker-1.50355673.
Nomura, T., Kanda, T., Kidokoro, H., Suehiro, Y., & Yamada, S. (2016). Why do children abuse robots? Interaction Studies, 17(3), 347–369.
Olson, P. (2018). This AI has sparked a budding friendship with 2.5 million people. Retrieved January 30, 2021 from https://www.forbes.com/sites/parmyolson/2018/03/08/replika-chatbot-google-machine-learning/?Sh=7770f5584ffa.
Pei, Z., & Paswan, A. (2018). Consumers’ legitimate and opportunistic product return behaviors in online shopping. Journal of Electronic Commerce Research, 19(4), 301–319.
Petisca, S., Paiva, A., & Esteves, F. (2020). Perceptions of people’s dishonesty towards robots. International Conference on Social Robotics (pp. 132–143). Springer.
Pickard, M. D., & Roster, C. A. (2020). Using computer automated systems to conduct personal interviews: Does the mere presence of a human face inhibit disclosure? Computers in Human Behavior, 105, Article 106197.
Pickard, M. D., Roster, C. A., & Chen, Y. (2016). Revealing sensitive information in personal interviews: Is self-disclosure easier with humans or avatars and under what conditions? Computers in Human Behavior, 65, 23–30.
Rittman, T. (2012). Nine tactics consumers use to make fraudulent returns. Retrieved June 10, 2021 from https://chainstoreage.com/news/nine-tactics-consumers-use-make-fraudulent-returns.
Rosenthal, S. (2019), How can retailers address the growing problem of wardrobing? Retail info systems. Retrieved January 28, 2021 from https://risnews.com/how-can-retailers-address-growing-problem-wardrobing.
Rotman, J. D., Khamitov, M., & Connors, S. (2018). Lie, cheat, and steal: How harmful brands motivate consumers to act unethically. Journal of Consumer Psychology, 28(2), 353–361.
Rozin, P., & Nemeroff, C. (2002). Sympathetic magical thinking: The contagion and similarity “heuristics”. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 201–216). Cambridge University Press.
Schrage, M. (2016). Why you shouldn’t swear at Siri. Retrieved June 10, 2021 from https://hbr.org/2016/10/why-you-shouldnt-swear-at-siri.
Schweitzer, F., Belk, R., Jordan, W., & Ortner, M. (2019). Servant, friend or master? The relationships users build with voice-controlled smart devices. Journal of Marketing Management, 35(7–8), 693–715.
Shah, J. (2003). Automatic for the people: How representations of significant others implicitly affect goal pursuit. Journal of Personality and Social Psychology, 84(4), 661–681.
Shankar, V. (2018). How artificial intelligence (AI) is reshaping retailing. Journal of Retailing, 94(4), vi–xi.
Siciliano, L. (2016). Japan’s version of Amazon Echo is a female hologram that wants you to be her ‘master’. Retrieved January 30, 2021 from https://www.businessinsider.com/gatebox-female-hologram-japan-wife-ai-assistant-companion-her-master-azuma-hikari-2016-12.
Siegel, J. (n.d.) The ethical implications of the Chatbot user experience. Retrieved October 28, 2021 from https://www.bentley.edu/centers/user-experience-center/ethical-implications-chatbot-user-experience.
Skeldon, P. (2019). ASOS leads the way in ending ‘Wardrobing’ trend that costs retail £60bn a year. Retrieved November 4, 2021 from https://internetretailing.net/industry/industry/asos-leads-the-way-in-ending-wardrobing-trend-that-costs-retail-60bn-a-year-19443.
Soraperra, I., Weisel, O., & Ploner, M. (2019). Is the victim Max (Planck) or Moritz? How victim type and social value orientation affect dishonest behavior. Journal of Behavioral Decision Making, 32(2), 168–178.
Tangney, J. P., & Dearing, R. L. (2003). Shame and guilt. Guilford Press.
Tangney, J. P., Stuewig, J., & Mashek, D. J. (2007). Moral emotions and moral behavior. Annual Review of Psychology, 58, 345–372.
Tangney, J. P., Stuewig, J., & Martinez, A. G. (2014). Two faces of shame: The roles of shame and guilt in predicting recidivism. Psychological Science, 25(3), 799–805.
Taylor, H. (2016). Lowe’s introduces LoweBot, a new autonomous in-store robot. Retrieved October 28, 2021 from https://www.cnbc.com/2016/08/30/lowes-introduces-lowebot-a-new-autonomous-in-store-robot.html.
Tibbetts, S. G. (2003). Self-conscious emotions and criminal offending. Psychological Reports, 93(1), 101–126.
Todorov, A., & Oosterhof, N. N. (2011). Modeling social perception of faces [social sciences]. IEEE Signal Processing Magazine, 28(2), 117–122.
Tsai, C. I., & McGill, A. L. (2011). No pain, no gain? How fluency and construal level affect consumer confidence. Journal of Consumer Research, 37(5), 807–821.
Wang, X., & Krumhuber, E. G. (2018). Mind perception of robots varies with their economic versus social function. Frontiers in Psychology, 9, 1230.
Wang, X., & McClung, S. R. (2012). The immorality of illegal downloading: The role of anticipated guilt and general emotions. Computers in Human Behavior, 28(1), 153–159.
Ward, A. F., Olsen, A. S., & Wegner, D. M. (2013). The harm-made mind: Observing victimization augments attribution of minds to vegetative patients, robots, and the dead. Psychological Science, 24(8), 1437–1445.
Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54(6), 1063–1070.
Waytz, A., & Norton, M. I. (2014). Botsourcing and outsourcing: Robot, British, Chinese, and German workers are for thinking—Not feeling—Jobs. Emotion, 14(2), 434–444.
Waytz, A., Gray, K., Epley, N., & Wegner, D. M. (2010). Causes and consequences of mind perception. Trends in Cognitive Sciences, 14(8), 383–388.
Webb, A. (2020). How to get free Amazon returns. Retrieved November 29, 2021 from https://becleverwithyourcash.com/how-to-get-free-amazon-returns.
Wirtz, J., & Kum, D. (2004). Consumer cheating on service guarantees. Journal of the Academy of Marketing Science, 32(2), 159–175.
Wirtz, J., & McColl-Kennedy, J. R. (2010). Opportunistic customer claiming during service recovery. Journal of the Academy of Marketing Science, 38(5), 654–675.
Wirtz, J., Patterson, P. G., Kunz, W. H., Gruber, T., Lu, V. N., Paluch, S., & Martins, A. (2018). Brave new world: Service robots in the frontline. Journal of Service Management, 29(5), 907–931.
World Economic Forum. (2020). The future of jobs report 2020. World Economic Forum.
Yam, K. C., & Reynolds, S. J. (2016). The effects of victim anonymity on unethical behavior. Journal of Business Ethics, 136(1), 13–22.
Zebrowitz, L. A., & Montepare, J. M. (2008). Social psychological face perception: Why appearance matters. Social and Personality Psychology Compass, 2(3), 1497–1517.
Zhao, B., & Xu, S. (2013). Does consumer unethical behavior relate to birthplace? Evidence from China. Journal of Business Ethics, 113(3), 475–488.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Stephanie Noble and Martin Mende served as Guest Editors for this article.
About this article
Cite this article
Kim, T., Lee, H., Kim, M.Y. et al. AI increases unethical consumer behavior due to reduced anticipatory guilt. J. of the Acad. Mark. Sci. 51, 785–801 (2023). https://doi.org/10.1007/s11747-021-00832-9