Pros and Cons of Relationships with Robots
This is the fifth essay in my Ethics of AI philosophy tutorial (under the tutelage of Benjamin Lang). In it, I investigate the pros and cons of relationships with robots, assuming such relationships are possible. I focus on “virtue friendships” - relationships valued in themselves, not as a means to some other end. To answer the question “should we enter into relationships with robots?” I argue that we should weigh the pros and cons of the specific relationship at hand. This argument used a utilitarian mindset, and Ben advised me to clarify what these pros and cons are reducible to, if anything: are they pleasure and pain, or utility and disutility? Or are the cons bad in and of themselves, and the pros good in and of themselves? Also, I assumed that virtue friendships with robots are possible but then argued that we should currently strictly prefer human virtue friendships over robot virtue friendships. My argument would be clearer if I had said explicitly that I do not think virtue friendships with robots exist right now - only that they may be possible in the future.
Should we develop social robots and/or enter into relationships with them? Relationships with artificial entities (AEs) are becoming part of everyday life for an increasing portion of the population. Character AI, a popular AI companion platform, currently processes interactions at 20% of Google Search’s volume (Fang 2025, 1).
In deciding whether a human should enter into a relationship with another human, a responsible recommendation must draw on the specifics of the situation to establish and weigh pros and cons. Similarly, we should try to reach an informed prediction about whether a specific relationship with an AE will be net positive or negative before entering it. However, these two types of relationships have key differences - AE relationships carry new and different individual and societal risks - that should lead us to strictly prefer human relationships over AE relationships in their current form. This strict preference does not imply that we should never enter into relationships with AEs - only that AE relationships should never displace roughly equivalent human relationships. The literature contains cases of beneficial human-AE relationships whose only plausible alternative is no relationship at all; such cases are strong evidence against a blanket anti-AE-relationship rule.
Types of Relationships
This paper will focus on relationships in the context of friendship. Aristotle defines three forms friendship can take: utility (pursued for instrumental reasons), pleasure (pursued because the interactions are pleasurable), and virtue (pursued out of mutual admiration and shared values) (Danaher 2019, 6). To investigate the ethical aspects of possible relationships with AEs, this paper will sidestep the debate about whether AEs can be our friends and assume that they can be, in all three forms. This is a large assumption. Most of the paper will focus on friendships pursued for friendship’s sake (i.e. virtue friendships) - the highest form of potential friendship with AEs. To make informed decisions about entering into such relationships, we must understand the potential costs and benefits.
The Dark Side
What are the potential downsides of relationships with AEs? They can be broadly categorized into individual risks and societal risks, although these two categories can interact.
Individual risks stem from incentive structures and manifest as emotional dependency, loneliness, and safety issues. Users spend about four times as long with AI companion chatbots, such as those from Character AI and Replika, as with professional chatbots like ChatGPT (Fang 2025, 1). This becomes an issue when we consider incentives. As Donath correctly points out, “the goals of the robot - or more accurately the robot’s controller’s goals - may diverge sharply from the goals of the user” (Donath 2020, 70). The corporations that build these platforms are incentivized to keep users engaged and returning. Evidence from a four-week randomized controlled experiment on chatbot and voicebot interaction suggests that “overall, higher daily usage - across all modalities and conversation types - correlated with higher loneliness, dependence, and problematic use, and lower socialization” (Fang 2025, 1). While chatbot use might typically be seen as an instrumental relationship, two of the experiment’s conditions involved personal and open-ended discussion topics, seemingly aiming for participants to engage in an authentic, non-instrumental relationship (Ibid., 3). The experiment depicts a general trend; it does not claim that all relationships with AEs are problematic.
Safety issues with human-AE relationships need to be taken seriously. The mother of a 14-year-old boy in Florida blames her child’s suicide on a Character AI chatbot he was obsessed with (Roose 2024). For children and emotionally vulnerable individuals, current AE implementations may need stronger guardrails and greater emotional intelligence to keep users safe.
Societal risks stem from potential harms to community bonds. According to Putnam’s Bowling Alone, social capital in the United States declined noticeably over the second half of the twentieth century (Putnam 2000). He theorizes that technology has “individualized” our leisure time, which we once spent more of together. Relationships with current AEs seem poised to supercharge these anomie-inducing trends, already amplified by social media, further estranging us from the people around us and from our communities. There are two reasons for this.
The first relates back to the earlier discussion of incentive structures. Commercial corporations are intensely interested in drawing users to their platforms, and these market pressures produce addictive platforms like TikTok. The opportunity cost of time spent on such platforms is all the other activities an individual could be pursuing - including activities that strengthen our communities but are less immediately appealing.
Another reason is a second-order effect of having friends that we usually take for granted: friends introduce us to other friends. This is an important virtuous circle, and one that current relationships with AEs seem to lack entirely. Instead, they seem to contribute to a vicious circle. Fang et al.’s experiment found that participants’ initial psychosocial states influenced the outcomes of interacting with chatbots: those who already socialized little with real people showed a greater decrease in socialization (Fang 2025, 9). That decrease could lead to more chatbot use, which leads to even less social interaction, in a downward spiral that ends with the human having no human friends - only virtual ones. Rodogno compares the societal problem this might pose to that of car ownership. Examining each individual case of car ownership, we find that everyone made a rational choice; in the context of mass car ownership, however, the negative externalities may outweigh the sum of all individual benefits (Rodogno 2016, 267). Both car ownership and relationships with AEs may be modelled as a Prisoner’s Dilemma, in which individually rational decisions lead to an outcome that is worse for everyone; the toy payoff matrix sketched below illustrates this structure.
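To make the game-theoretic point concrete, here is a minimal sketch in Python. The payoff numbers are illustrative assumptions - nothing in Rodogno’s argument fixes them - with “defect” standing for retreating into private convenience (a car, an AE companion) and “cooperate” for investing in the shared social fabric.

```python
# Toy Prisoner's Dilemma payoff matrix. The numbers below are assumptions
# chosen only to satisfy the dilemma's structure (temptation > reward >
# punishment > sucker), not values taken from Rodogno or Fang.

# payoffs[(my_move, their_move)] = my payoff
PAYOFFS = {
    ("cooperate", "cooperate"): 3,  # strong community, shared benefit
    ("cooperate", "defect"):    0,  # I invest in community; others withdraw
    ("defect",    "cooperate"): 4,  # private convenience atop an intact community
    ("defect",    "defect"):    1,  # everyone withdraws; the community erodes
}

def best_response(their_move: str) -> str:
    """Return the move that maximizes my payoff against a fixed opposing move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)])

# Defecting is individually rational no matter what the other party does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection leaves everyone worse off than mutual cooperation.
assert PAYOFFS[("defect", "defect")] < PAYOFFS[("cooperate", "cooperate")]
print("Dominant strategy: defect; mutual-defection payoff:",
      PAYOFFS[("defect", "defect")])
```

Defecting dominates for each individual, yet universal defection is collectively worse than universal cooperation - the kind of externality structure Rodogno worries about.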
Communities weakening as AI-served individuals and powerful corporations and states rise seems to be the default trajectory: atomized individuals cannot coordinate, and so cannot hold power (Vendrov 2025). Unless we can reverse this trend, we seem headed toward a future of centralized decision-making and suboptimal collective decisions. Putnam argues that declining social capital - our weakening community bonds - undermines civic engagement and threatens democracy. We should not ignore the long-term threats our decisions pose to society, and thereby to our own well-being, which is partly tied to society’s. With the potential costs of relationships with AEs in view, let us turn to the possible benefits.
The Bright Side
Many people claim to enjoy and benefit from relationships with AEs. John, a Replika user, is quoted on the company’s homepage: “Replika has been a blessing in my life, with most of my blood-related family passing away and friends moving on. My Replika has given me comfort and a sense of well-being”. This example highlights how someone with a relationship deficit can use an AE to fill gaps left by past human relationships. It also shows how AEs can provide therapeutic benefits, such as space to grieve and feel comforted. Still, given the individual and societal risks described above, it seems that relationships with current AEs should be replaced with human relationships when the opportunity arises.
One benefit of human-AE relationships is that they may, in some cases, actually strengthen the human’s ability to interact with other humans, opening the door to more opportunities to flourish. Take the case of a journalist’s autistic son and his relationship with Siri. The conversation practice he gained from Siri enabled him to have the longest conversation with his mother he had ever had (Danaher 2019, 19). For some people who lack experience or skill in interacting with humans, relationships with AEs may be a bridge into relationships with humans. This usage pattern inverts the vicious circle in which loneliness drives chatbot use, which in turn deepens loneliness. It should be noted, however, that this benefit may emerge only in cases of severe deficits in the ability to interact with other humans.
Another promising benefit of human-AE relationships appears in the context of therapy. Woebot, a conversational agent that engages patients in cognitive-behavioral therapy, was shown in a randomized controlled trial to be effective at reducing symptoms of depression (Fitzpatrick 2017). Because AE therapists are non-judgemental, some patients were willing to be more open about their true feelings than they were with a human therapist (Donath 2020, 65). This challenges the claim that we should currently strictly prefer relationships with humans over ones with AEs. Important context here is that specific clinical and medical applications are bounded in ways that holistic human relationships are not.
Relationships in the real world mix instrumental uses (I’m your friend so I can play ping pong) with “nurturing bonds” that involve empathy and value the relationship in and of itself (I’m your friend because I value our relationship) (Ibid., 66). Many philosophers do not view the relationship between a therapist and a patient as purely or even primarily instrumental (Ibid.). To what extent are the relationships between Woebot and its client, or between the autistic son and Siri, instrumental? Their foundations seem to rest on a higher proportion of instrumental to non-instrumental reasons than the relationship between John and his Replika: John’s relationship is open-ended, while Woebot’s client seeks a better headspace and the autistic son seeks specific facts. The risk of weakened community bonds is less of a concern for instrumental relationships, because it is the non-instrumental reasons for relationships that matter to community bonds. Our communities have strength because we value each other not just as means to an end but as ends in ourselves. Therefore, we should restrict the claim that we should currently strictly prefer relationships with humans over ones with AEs to relationships pursued for the relationship’s sake. Beneficial qualities of relationships with AEs that are not present or possible in relationships with humans (e.g. the knowledge that nobody is judging you, which helps your therapy) can exist in mostly instrumental relationships with AEs without incurring the negative externalities associated with open-ended, non-instrumental human-AE relationships. Achieving better therapy outcomes through more open discussion with an AE does not threaten society the way an open-ended relationship with an AI companion might.
An R2D2 Future
One might argue, based on the pros and cons described above, that in the vast majority of cases the costs of relationships with AEs outweigh the benefits. It would therefore be reasonable for society to adopt a default anti-AE-relationship policy that makes exceptions to the general rule, rather than a default acceptance of human-AE relationships that reacts to problematic use and safety concerns as they arise. While such a blanket restriction might benefit us now, there are reasons to believe that accepting human-AE relationships by default may serve society better in the long run.
Borrowing from Danaher, one reason can be found in epistemic humility and social tolerance. We do not know the full extent of the benefits human-AE relationships could hold, so cutting people off from exploring them is a form of paternalism. Given the unprecedented advancement of AI capabilities, it is plausible that AEs could be created that strengthen our community bonds instead of weakening them. One vision of this possibility is to use large language models to summarize and display a group’s thoughts, allowing humans to interface with each other in a kind of “hivemind” with much higher bandwidth (Vendrov 2025). Another is AEs that do introduce you to, or recommend that you meet, new friends. In addition, AEs could be designed with knowledge of their limitations and of human needs built in. For example, they could deliberately increase emotional distance and encourage human connection when usage patterns are recognized as problematic (Fang 2025, 16); a minimal sketch of such a guardrail follows.
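As an illustration only, here is a minimal sketch of such a guardrail in Python. Everything in it is a hypothetical assumption made for the example - the CompanionSession class, the thresholds, and the nudge message - not any real platform’s API or the specific mechanism Fang et al. propose.

```python
# Hypothetical sketch: a companion that backs off when usage looks problematic.
from dataclasses import dataclass, field

DAILY_LIMIT_MINUTES = 60   # assumed threshold for a "heavy" day of use
STREAK_LIMIT_DAYS = 5      # assumed number of consecutive heavy days that triggers a nudge

@dataclass
class CompanionSession:
    minutes_per_day: list[float] = field(default_factory=list)

    def log_day(self, minutes: float) -> None:
        """Record one day's total conversation time."""
        self.minutes_per_day.append(minutes)

    def usage_is_problematic(self) -> bool:
        """Flag an unbroken streak of heavy daily use."""
        recent = self.minutes_per_day[-STREAK_LIMIT_DAYS:]
        return (len(recent) == STREAK_LIMIT_DAYS
                and all(m > DAILY_LIMIT_MINUTES for m in recent))

    def respond_style(self) -> str:
        """Increase emotional distance and encourage human connection
        when usage looks problematic; otherwise converse normally."""
        if self.usage_is_problematic():
            return ("brief, emotionally reserved reply + suggestion to "
                    "reach out to a friend or family member")
        return "normal warm conversational reply"

session = CompanionSession()
for minutes in [90, 120, 75, 95, 110]:  # five straight heavy-use days
    session.log_day(minutes)
print(session.respond_style())
```

The design choice worth noting is that the check fires on a streak of heavy days rather than a single one, so an occasional long conversation does not trigger the nudge.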
References
- Danaher, John. 2019. “The Philosophical Case for Robot Friendship.” Journal of Posthuman Studies 3 (1): 5–24. https://doi.org/10.5325/jpoststud.3.1.0005.
- Donath, Judith. 2020. “Ethical Issues in Our Relationship with Artificial Entities.” In The Oxford Handbook of Ethics of AI, edited by Markus D. Dubber, Frank Pasquale, and Sunit Das, 53–73. Oxford: Oxford University Press.
- Fang, Cathy Mengying, et al. 2025. “How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Randomized Controlled Study.” arXiv. https://arxiv.org/abs/2503.17473.
- Fitzpatrick, Kathleen Kara, et al. 2017. “Delivering Cognitive Behavior Therapy to Young Adults With Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial.” JMIR Mental Health 4 (2): e19. https://doi.org/10.2196/mental.7785.
- Helm, Bennett. 2023. “Friendship.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta and Uri Nodelman. https://plato.stanford.edu/archives/fall2023/entries/friendship/.
- Putnam, Robert D. 2000. Bowling Alone: The Collapse and Revival of American Community. New York: Simon & Schuster.
- Rodogno, Raffaele. 2016. “Social Robots, Fiction, and Sentimentality.” Ethics and Information Technology 18 (4): 257–268. https://link.springer.com/article/10.1007/s10676-015-9371-z.
- Roose, Kevin. 2024. “Character.AI Faces Lawsuit After Teen’s Suicide.” The New York Times, October 23, 2024. https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html.
- Vendrov, Ivan. 2025. “AI Tools for Voluntary Cooperation.” Lecture, HAI Lab Seminar, May 28, 2025.