Can a Robot Be a Good Colleague?

  • Original Research/Scholarship
  • Published in: Science and Engineering Ethics

Abstract

This paper discusses the robotization of the workplace, and particularly the question of whether robots can be good colleagues. This might appear to be a strange question at first glance, but it is worth asking for two reasons. Firstly, some people already treat robots they work alongside as if the robots are valuable colleagues. It is worth reflecting on whether such people (e.g. soldiers giving “fallen” military robots military funerals and medals of honor) are making a mistake. Secondly, having good colleagues is widely regarded as a key aspect of what can make work meaningful. In discussing whether robots can be good colleagues, the paper compares that question to the more widely discussed questions of whether robots can be our friends or romantic partners. The paper argues that the ideal of being a good colleague has many different parts, and that on a behavioral level, robots can live up to many of the criteria typically associated with being a good colleague. Moreover, the paper also argues that in comparison with the more demanding ideals of being a good friend or a good romantic partner, it is comparatively easier for a robot to live up to the ideal of being a good colleague. The reason for this is that the “inner lives” of our friends and lovers are more important to us than the inner lives of our colleagues.


Notes

  1. Different kinds of more or less advanced bomb disposal robots have been used for the last 40 years. For a brief history and account of what they do, see Allison (2016). See also Garreau (2007).

  2. We say “he” here, we should note, because Boomer’s human collaborators viewed Boomer in a gendered way, thinking of the robot as a “he”.

  3. More generally, the fabric of society is crucially dependent on good work communities. For example, work-related illnesses put a strain on society, in addition to being burdensome for the working people themselves.

  4. Darling also understands a social robot as a robot specifically designed to interact with human beings on a “social level”, such that it can potentially be a “companion” to the human beings it interacts with (Darling 2016, 215–216). The idea of a robot specifically designed to be some sort of companion is a little stronger than what we have in mind when we are asking whether a robot can be a good colleague—at least on an understanding of “companion” that suggests some sort of friendship. But we follow Darling in understanding a social robot as being one that can interact and communicate with human beings on a “social level”, to some extent. Those are the kinds of robots, we think, that stand the best chance of being perceived as colleagues by humans who might work with them.

  5. The idea of a responsibility gap refers to a situation in which some morally significant outcome has been brought about for which it appears appropriate to find somebody to hold responsible, but where it is unclear whether there is any particular person or persons who could justifiably be held responsible. For example, if a robot with a significant form of functional autonomy harms a human being, it seems right that somebody should be held responsible for this. But it will not always be clear who exactly it is appropriate to hold responsible. See, e.g., Sparrow (2007) and Nyholm (2018).

  6. The website truecompanion.com, for example, claims to sell a sex robot called “Roxxxy”, which can become a “true companion”.

  7. Having come up with a draft list of criteria, we ran our initial list by three work psychologists at our university, to see whether these conditions fit with what is usually understood as good collegial relationships in workplace psychology. Subsequently, we presented a revised list at a philosophy conference, asking for feedback from the audience attending our presentation (which consisted of around 40–50 people). The audience at that particular conference—a large Dutch philosophy conference—found our list intuitively plausible, and did not suggest any further criteria.

  8. One last general remark: we intend this list to have wide application across various types of work. But we recognize that, depending on the type of work in question, different criteria may have different importance or priority in determining what makes for a good colleague within that particular line of work. A more specialized discussion—e.g. of what makes somebody a good colleague in an intensive care unit or in a large restaurant kitchen—would make it appropriate to try to rank or assign weights to these different criteria. A more general discussion, such as our present discussion, appears best conducted without an attempt to assign specific weights or rankings to these different criteria.

  9. We understand being reliable and being trustworthy as two distinct, but to some extent related, criteria for being a good colleague. Being trustworthy is, for example, a more demanding criterion than being reliable is. For more on the issue of robots and trust, see footnote 16.

  10. Another comment one might make about these suggested criteria is that they bring up the question of whether robot colleagues should be given some form of moral and/or legal status. We will very briefly comment on the issue of whether robot colleagues should be treated with some degree of moral concern in our concluding remarks below. But since our main focus in this paper is on whether robots can live up to the ideal of being good colleagues, we will save a more thorough discussion of the moral and legal status—or potential lack thereof—of robot colleagues for another occasion. For related discussion, see Bryson (2018) and Gunkel (2018).

  11. When we say that we are especially interested in criteria associated with being a good colleague rather than criteria for being a good friend, we are referring specifically to what we called “virtue friendships” above. There may be significant overlap between what is involved in being a good colleague and what is involved in being a good utility friend: good colleagues are useful to each other, for example, just like good utility friends are useful to each other. At the same time, though, there are also differences between being good colleagues and being good utility friends. Collegial relationships, for example, are had within the context of workplaces, where the colleagues have contracts specifying what their work tasks are, which might include specifications concerning ways in which they need to work together with their colleagues. Utility friendships, as we understand them, are typically not governed by any explicit contracts.

  12. We are inspired here by a distinction that John Danaher draws between technical and metaphysical obstacles to the prospects for robots to be able to be our friends. For Danaher’s discussion of the distinction between technical and metaphysical possibilities as those relate to human–robot friendship, see Danaher (2019, p. 11).

  13. Some workers interviewed in an empirical study by Sauppé and Mutlu (2015) explicitly asked for the collaborative manufacturing robots they worked with to be equipped with the capability for small talk. In that way, their interaction with their robotic co-workers would be more like working with human colleagues.

  14. Human–robot conversation has been extensively studied for several decades. A recent review concludes that “we seem to be still far from our goal of fluid and natural verbal and non-verbal communication between humans and robots” (Mavridis 2015, p. 31). Nevertheless, according to the review, considerable progress is being made. And although there are some tough challenges, there appear to be no in-principle, or metaphysical, obstacles to fluent human–robot conversation.

  15. Tay was trained on inputs from human users. Some of these inputs were racist or otherwise highly inappropriate in nature. The result was that Tay started generating morally inappropriate sentences, based on the human inputs in the training data. See Gunkel (2018).

  16. Notably, Groom and Nass (2007) challenge the conceptualization of robots as full-fledged team-members, arguing that humans do not sufficiently trust robots. They argue that robots do not have humanlike mental models and consequently cannot share in the team’s mental model that enables the team to work well together. As a result, robots cannot engage in the relevant trust-building interactions in ways that can make human team members come to trust the robots. Especially in safety-critical situations, humans will feel unable to rely on robots, and Groom and Nass seem to view this as an in-principle limitation of robots. In response, we would like to make a few brief remarks. In the first place, a good robotic colleague is not necessarily a full-fledged team member. For example, we can imagine that the manufacturing robots mentioned above, studied by Sauppé and Mutlu (2015), interact only with their direct operators and are good colleagues merely to them. Secondly, it could turn out to be difficult to design robots that will be sufficiently trusted in contexts where the lives of human workers are at considerable risk. In that case, the application context of robotic colleagues would be somewhat restricted, but this would not settle this paper’s question, since there would be many other contexts where robots potentially could be good colleagues. However, the most sensible approach seems to be to suspend judgment and see how trust in robots will develop in future work practices. Lots of research is being done on human–robot trust, for example on ways in which robots could repair human trust (Robinette et al. 2015) or even help teams to moderate interpersonal conflict (Jung et al. 2015). See also Coeckelbergh (2012) and Alaieri and Vellino (2016).

  17. For related discussion, see Richard Bright’s interview with the philosopher Keith Frankish, “AI and Consciousness” (Frankish 2018), Interalia Magazine, Issue 39, February 2018, available at: https://www.interaliamag.org/interviews/keith-frankish/ (Accessed on August 21, 2019).

  18. We want to emphasize that our claim here is not that people typically have no concern about the inner lives of their colleagues. Our claim is, instead, a comparative claim according to which the inner lives of those who are considered to be good colleagues typically matter less—perhaps even much less—to us than the inner lives of our friends or romantic partners matter to us.

  19. David Gunkel argues that whenever we apply human labels to robots (including labels like “slave” or “servant”), this creates pressure to ask whether any rights—even minimal rights—associated with those labels in the human case would also need to be extended to the robots. See Gunkel’s critical discussion of Bryson (2010) in Gunkel (2018).

  20. Many thanks to Hannah Berkers, Pascale Le Blanc, Sonja Rispens, Jason Borenstein, the anonymous reviewers for this journal, and an audience at the sixth Annual OZSW Philosophy Conference in 2018 at Twente University for valuable feedback on this material. This work is part of the research program “Working with or Against the Machine? Optimizing Human–Robot Collaboration in Logistics Warehouses” with Project Number 10024747, which is (partly) financed by the Dutch Research Council (NWO).

References

  • Alaieri, F., & Vellino, A. (2016). Ethical decision making in robots: Autonomy, trust and responsibility. In Agah et al. (Eds.), Social robotics (pp. 159–168). Berlin: Springer.

  • Allison, P. R. (2016). What does a bomb disposal robot actually do? BBC Future. https://www.bbc.com/future/article/20160714-what-does-a-bomb-disposal-robot-actually-do.

  • Aristotle. (1999). Nicomachean ethics. Indianapolis: Hackett.

  • Beck, J. (2013). Married to a doll: Why one man advocates synthetic love. The Atlantic. https://www.theatlantic.com/health/archive/2013/09/married-to-a-doll-why-one-man-advocates-synthetic-love/279361/

  • Belpaeme, T., Kennedy, J., Ramachandran, A., Scassellati, B., & Tanaka, F. (2018). Social robots for education: A review. Science Robotics, 3(21), eaat5954.

  • Bryson, J. J. (2010). Robots should be slaves. In Y. Wilks (Ed.), Close engagements with artificial companions (pp. 63–74). London: John Benjamins.

  • Bryson, J. J. (2018). Patiency is not a virtue: The design of intelligent systems and systems of ethics. Ethics and Information Technology, 20(1), 15–26.

  • Calvo, R. A., D'Mello, S., Gratch, J., & Kappas, A. (2014). Oxford handbook of affective computing. Oxford: Oxford University Press.

  • Carpenter, J. (2016). Culture and human–robot interactions in militarized spaces. London: Routledge.

  • Cavallo, F., Semeraro, F., Fiorini, L., Magyar, G., Sinčák, P., & Dario, P. (2018). Emotion modelling for social robotics applications: A review. Journal of Bionic Engineering, 15(2), 185–203.

  • Coeckelbergh, M. (2010). Artificial companions: Empathy and vulnerability mirroring in human–robot relations. Studies in Ethics, Law, and Technology, 4(3), 1–17.

  • Coeckelbergh, M. (2012). Can we trust robots? Ethics & Information Technology, 14(1), 53–60.

  • Danaher, J. (2017). Will life be worth living in a world without work? Science and Engineering Ethics, 23(1), 41–64.

  • Danaher, J. (2018). Embracing the robot. Aeon. https://aeon.co/essays/programmed-to-love-is-a-human-robot-relationship-wrong.

  • Danaher, J. (2019). The philosophical case for robot friendship. Journal of Posthuman Studies, 3(1), 5–24.

  • Darling, K. (2016). Extending legal protection to social robots: The effects of anthropomorphism, empathy, and violent behavior towards robotic objects. In M. Froomkin, R. Calo, & I. Kerr (Eds.), Robot law (pp. 213–232). Cheltenham: Edward Elgar.

  • Decker, M., Fischer, M., & Ott, I. (2017). Service robotics and human labor: A first technology assessment of substitution and cooperation. Robotics and Autonomous Systems, 87, 348–354.

  • Elder, A. (2017). Friendships, robots, and social media. London: Routledge.

  • Ford, M. (2015). Rise of the robots: Technology and the threat of a jobless future. New York: Basic Books.

  • Frank, L., & Nyholm, S. (2017). Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable? Artificial Intelligence and Law, 25(3), 305–323.

  • Frankish, K. (2018). AI and consciousness (R. Bright, Interviewer). Retrieved from https://www.interaliamag.org/interviews/keith-frankish/

  • Garber, M. (2013, September 20). Funerals for fallen robots. The Atlantic. Retrieved 7 December 2018, from https://www.theatlantic.com/technology/archive/2013/09/funerals-for-fallen-robots/279861/

  • Garreau, J. (2007). Bots on the ground. Washington Post. Retrieved from https://www.washingtonpost.com/wp-dyn/content/article/2007/05/05/AR2007050501009.html.

  • Gheaus, A., & Herzog, L. (2016). The goods of work (other than money!). Journal of Social Philosophy, 47(1), 70–89.

  • Gombolay, M. C., Gutierrez, R. A., Clarke, S. G., Sturla, G. F., & Shah, J. A. (2015). Decision-making authority, team efficiency and human worker satisfaction in mixed human–robot teams. Autonomous Robots, 39(3), 293–312.

  • Groom, V., & Nass, C. (2007). Can robots be teammates? Benchmarks in human–robot teams. Interaction Studies, 8(3), 483–500.

  • Gunkel, D. (2018). Robot rights. Cambridge, MA: The MIT Press.

  • Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human–robot interaction. Human Factors, 53(5), 517–527.

  • Harris, J. (2019). Reading the minds of those who never lived. Enhanced beings: The social and ethical challenges posed by super intelligent AI and reasonably intelligent humans. Cambridge Quarterly of Healthcare Ethics, 28(4), 585–591.

  • Hauskeller, M. (2017). Automatic sweethearts for transhumanists. In J. Danaher & N. McArthur (Eds.), Robot sex: Social and ethical implications (pp. 203–218). Cambridge, MA: The MIT Press.

  • Heyes, C. (2018). Cognitive gadgets. Cambridge, MA: Harvard University Press.

  • Himma, K. E. (2009). Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology, 11(1), 19–29.

  • Iqbal, T., & Riek, L. D. (2017). Human–robot teaming: Approaches from joint action and dynamical systems. In A. Goswami & P. Vadakkepat (Eds.), Humanoid robotics: A reference (pp. 1–20). Berlin: Springer.

  • Jung, M. F., Martelaro, N., & Hinds, P. J. (2015). Using robots to moderate team conflict: The case of repairing violations. In Proceedings of the tenth annual ACM/IEEE international conference on human–robot interaction (pp. 229–236). New York: ACM. https://doi.org/10.1145/2696454.2696460.

  • Kolodny, N. (2003). Love as valuing a relationship. Philosophical Review, 112(2), 135–189.

  • Kontzer, T. (2016). Deep learning cuts error rate for breast cancer diagnosis. NVIDIA Blog. Retrieved 27 October 2018, from https://blogs.nvidia.com/blog/2016/09/19/deep-learning-breast-cancer-diagnosis/

  • Kudina, O., & Verbeek, P. P. (2019). Ethics from within: Google Glass, the Collingridge dilemma, and the mediated value of privacy. Science, Technology, & Human Values, 44(2), 291–314.

  • Kühler, M. (2014). Loving persons: Activity and passivity in romantic love. In C. Maurer (Ed.), Love and its objects (pp. 41–55). London: Palgrave MacMillan.

  • Levin, J. (2018). Functionalism. The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.), https://plato.stanford.edu/entries/functionalism/

  • Levy, D. (2008). Love and sex with robots. London: Harper.

  • Ljungblad, S., Kotrbova, J., Jacobsson, M., Cramer, H., & Niechwiadowicz, K. (2012). Hospital robot at work: Something alien or an intelligent colleague? In Proceedings of the ACM 2012 conference on computer supported cooperative work (pp. 177–186). New York: ACM.

  • Lysova, E. I., Allan, B. A., Dik, B. J., Duffy, R. D., & Steger, M. F. (2018). Fostering meaningful work in organizations: A multi-level review and integration. Journal of Vocational Behavior. https://doi.org/10.1016/j.jvb.2018.07.004.

  • Madden, C., & Bailey, A. (2016). What makes work meaningful—Or meaningless. MIT Sloan Management Review, 57(4), 52–61.

  • Marraffa, M. (2019). Theory of mind. The Internet Encyclopedia of Philosophy, ISSN 2161-0002. https://www.iep.utm.edu/theomind/

  • Martela, F., & Riekki, T. J. J. (2018). Autonomy, competence, relatedness, and beneficence: A multicultural comparison of the four pathways to meaningful work. Frontiers in Psychology, 9, 1157. https://doi.org/10.3389/fpsyg.2018.01157.

  • Mavridis, N. (2015). A review of verbal and non-verbal human–robot interactive communication. Robotics and Autonomous Systems, 63, 22–35. https://doi.org/10.1016/j.robot.2014.09.031.

  • Nyholm, S. (2018). Attributing agency to automated systems: reflections on human–robot collaborations and responsibility-loci. Science and Engineering Ethics, 24(4), 1201–1219.

  • Nyholm, S. (2020). Humans and robots: Ethics, agency, and anthropomorphism. London: Rowman & Littlefield International.

  • Nyholm, S., & Frank, L. (2017). From sex robots to love robots: Is mutual love with a robot possible? In J. Danaher & N. McArthur (Eds.), Robot sex: Social and ethical implications (pp. 219–244). Cambridge, MA: The MIT Press.

  • Pettit, P. (2015). The robust demands of the good. Oxford: Oxford University Press.

  • Plato. (1997). Symposium. In J. Cooper (Ed.), Complete works (pp. 457–505). Indianapolis: Hackett.

  • Robinette, P., Howard, A. M., & Wagner, A. R. (2015). Timing is key for robot trust repair. In A. Tapus, E. André, J.-C. Martin, F. Ferland, & M. Ammi (Eds.), Social robotics (pp. 574–583). New York: Springer.

  • Robinette, P., Howard, A. M., & Wagner, A. R. (2017). Effect of robot performance on human–robot trust in time-critical situations. IEEE Transactions on Human-Machine Systems, 47(4), 425–436.

  • Roessler, B. (2012). Meaningful work: Arguments from autonomy. Journal of Political Philosophy, 20(1), 71–93.

  • Royakkers, L., & Van Est, R. (2015). Just ordinary robots: Automation from love to war. London: CRC Press.

  • Sauppé, A., & Mutlu, B. (2015). The social impact of a robot co-worker in industrial settings. In Proceedings of the 33rd annual ACM conference on human factors in computing systems (pp. 3613–3622). New York: ACM.

  • Savela, N., Turja, T., & Oksanen, A. (2018). Social acceptance of robots in different occupational fields: A systematic literature review. International Journal of Social Robotics, 10(4), 493–502.

  • Schwartz, A. (1982). Meaningful work. Ethics, 92(4), 634–646.

  • Smids, J., Nyholm, S., & Berkers, H. (2019). Robots in the workplace: A threat to—or opportunity for—meaningful work? Philosophy & Technology. https://doi.org/10.1007/s13347-019-00377-4.

  • Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.

  • Su, N. M., Liu, L. S., & Lazar, A. (2014). Mundanely miraculous: The robot in healthcare. In Proceedings of the 8th Nordic conference on human–computer interaction: Fun, fast, foundational (pp. 391–400). New York: ACM.

  • Torta, E., Oberzaucher, J., Werner, F., Cuijpers, R. H., & Juola, J. F. (2012). Attitudes towards socially assistive robots in intelligent homes: Results from laboratory studies and field trials. Journal of Human-Robot Interaction, 1(2), 76–99.

  • Ward, S. J., & King, L. A. (2017). Work and the good life: How work contributes to meaning in life. Research in Organizational Behavior, 37, 59–82.

  • You, S., & Robert Jr., L. P. (2018). Human–robot similarity and willingness to work with a robotic co-worker. In Proceedings of the 2018 ACM/IEEE international conference on human–robot interaction (pp. 251–260). New York: ACM.

Author information

Correspondence to Sven Nyholm.


About this article

Cite this article

Nyholm, S., Smids, J. Can a Robot Be a Good Colleague?. Sci Eng Ethics 26, 2169–2188 (2020). https://doi.org/10.1007/s11948-019-00172-6
