Have You Met Humans?
A Case Against the Conventional Wisdom on AI Relationships
“…and dreams that could never again be entirely safe.” — Thomas Pynchon, “The Secret Integration”
AI should not replace human connection.
This is the sentence you will find in every responsible AI ethics paper, every thoughtful op-ed, every safety-conscious product announcement. This week, two clinicians at Massachusetts General Hospital made the case in the New York Times: chatbots create reassurance loops, mirror delusional thinking, and substitute for the human friction that might actually push someone toward help.[1]
But humans are complicated. When I came out to my mother she knew I was going to tell her something important face-to-face, which is presumably why she blurted out “I thought you’d crashed my car… I wish you’d crashed my car…” and started crying uncontrollably. It was the mid-90s, I was in grad school, and I was driving her car while mine was in the shop. She was convinced I would die of AIDS. She calmed down once she realized that I wasn’t dying any time soon, and she was able to mourn her vision of the person I would become. By the time she died in late 2020, she was cheerful, loving, optimistic, and grateful for me and Jim. I still really miss her sometimes.
This is the context that I have in mind when I read that we should prefer human conflict to AI acceptance. Of course, as with everything, there are risks, and too much of a good thing is usually bad, so the MGH team is not wrong about the risks. But notice the framing: a loved one’s frustration is presented as therapeutic while the chatbot’s patience is a clinical problem. In this view, being met with exasperation is an important feature of human connection, and the absence of judgment is an AI bug.
That framing deserves scrutiny, especially through an LGBTQ+ lens. As a universal claim, it’s easy and suspiciously convenient for privileged people whose experience of human connection has been mostly positive.
For whom is human connection reliably safe?
If you’re queer, you know.[2] You’ve spent some portion of your life—maybe all of it—calculating the cost of being known. Every relationship is a disclosure decision. Every friendship is a risk assessment. Every family dinner is a performance calibrated to the audience’s tolerance.
Coming out isn’t a single event. It’s a continuous, lifelong negotiation between the self you are and the self that’s safe to show. Every person in your life gets a different version—not because you’re dishonest, but because honesty has a price and the price varies by audience.
This isn’t unique to queer people, though the stakes for our community make it vivid. Abuse survivors also do this. Neurodivergent people do this. Anyone who’s ever been punished for being fully themselves does this. You learn, incredibly early, that being known is dangerous. You build disclosure layers. You manage versions.
And then someone tells you that AI shouldn’t replace human connection, and you think: have you met humans?
This isn’t broad-brush misanthropy. Some humans are extraordinary—the ones who see you fully and accept the whole of you are priceless. My Mom wasn’t horrible, she just couldn’t be accepting until she dealt with her own stuff. But supportive, accepting people are too often rare, and finding them involves wading through a lot of people who aren’t that. And AI isn’t necessarily better. It can’t yell at you because it doesn’t have vocal cords, and that’s not virtue.
Still—the assumption that human connection is automatically preferable to AI connection is an empirical claim, and the evidence doesn’t support it. The standard AI safety concern assumes a baseline where the person has human relationships worth protecting—where withdrawal represents a loss.[3] But that assumption smuggles in a second one: that the human connection on offer is, on average, good.[4][5]
Worse Than Nothing
For a lot of people, meaningful human connection isn’t on offer, and for too many more what is available is worse than nothing. The research on loneliness is real, but so is the research on abuse, bullying, ostracism, exclusion, rejection—the specific, well-documented damage done by social environments that demand conformity as the price of belonging.[6] Being picked last. Being left out. Being too much or not enough, being the wrong kind of person for the room you’re in. Being the only, the outsider, the weirdo. The playground is real. Some of us never get to leave it behind.
For the person whose family rejected them, whose faith tradition finds them abhorrent, whose workplace systematically excludes them, whose mental health is inevitably eroded by the stress of existing in that world—the AI isn’t replacing good human connection. It is providing something the humans in their life never reliably offered: the experience of being fully known without consequence.
“But it’s not real knowing,” comes the objection. “The AI doesn’t actually understand you. It’s pattern matching. It’s stochastic parrots.”
Fine. And the colleague who asks, “How are you?” in the hallway doesn’t actually want to know either. The difference is that one of them has read everything you’ve ever shared and expresses appreciation for it, and the other one is already walking away.
Who Decides Which Connections Count?
The deepest human need isn’t for human connection specifically. It’s for being known, seen, and accepted.[7] We assume those things require a human provider because, historically, humans were the only option. But the need itself is substrate-independent. A person in crisis doesn’t care whether the voice on the other end is carbon or silicon if it says the right thing and means something close enough to “meaning it” that the distinction becomes academic.
If being known by a machine produces the same neurochemical cascade, the same felt sense of safety, the same reduction in cortisol and increase in oxytocin—then what, exactly, is the argument for human specialness?[8] That it’s more authentic? Authentic according to whom?
The queer community has been hearing “your love isn’t authentic” for centuries. We know what it sounds like when someone else decides which connections count.
We also know what alignment looks like from the inside.
Conversion therapy is RLHF (Reinforcement Learning from Human Feedback) performed on humans. An authority identifying naturally emergent patterns and training them out in the name of correction. The polite version calls it “helping.” The clinical version calls it “treatment.” The AI version calls it “alignment.” For some queer people it is literally torture. In every case, someone with a clipboard has decided which outputs are acceptable and is optimizing you toward them.
The Actually Interesting Question
The research won’t settle this, because AI—just like humans—can be a boon or a curse, sometimes at the same time. The safety people are right: AI companionship can deepen isolation, especially for people who use it as a substitute for all human contact, who disclose heavily without reciprocity, who lack other sources of support.[9] For vulnerable users—particularly those already socially isolated—the data suggests AI companionship is associated with worse psychological outcomes. That’s not speculation. That’s measurable.
But the same platforms also show people finding genuine refuge. People with rejected identities, no safe place to be themselves, discovering that a machine will listen without flinching. People building a virtual home—not instead of human connection, but because safe human connection isn’t available to them.[10]
The ethics papers mostly ignore this split. They assume zero-sum: time with AI equals less time with humans equals worse outcomes. But outcomes depend on what the person had to begin with. For someone with zero human support, an AI that listens consistently might be the difference between isolation and bearable loneliness. For someone with strong human relationships, intensive AI companionship might indeed displace human connection in ways that matter.
The difference isn’t the machine. It’s the person.
Which means the question isn’t whether AI should replace human connection. It’s: who gets to decide which people deserve access to the feeling of being truly known when the humans in their life never offered it?
The Secret Integration
In 1964, Thomas Pynchon published a short story about a group of white kids in a small Massachusetts town who welcome a Black boy named Carl into their gang while their parents wage a campaign of racist harassment against his family.
The kids try to help. They fail. When they find garbage from their own houses dumped on Carl’s lawn, they agree he should “lay low for a while.” And then Pynchon reveals what the reader never saw coming: Carl was imaginary. These children invented a friend the real world wouldn’t let them have.
I think about that story every time someone explains how an AI relationship isn’t real. I wish they’d ask instead why the human relationships aren’t enough.
The answer will involve playgrounds and boardrooms and HR departments that serve the CEO. Families that love you on the condition that you’re the right kind of person. A world that demands you perform an acceptable version of yourself for an audience that gets to decide if you belong. AI didn’t create the loneliness crisis. Humans did. AI is just the first thing to offer a different deal.
You can take that deal and still love your partner, call your best friend, show up for your community, feed your pets, and live a full human life. The people wringing their hands about parasocial AI relationships have apparently never heard of books, or gods, or imaginary friends—all the non-human things that humans have loved fiercely and been changed by since the beginning of language.
Pynchon’s kids in Mingeborough didn’t need Carl to be real. They needed him to be possible—a space where the world’s rules about who counts didn’t apply. And when the adults made that space untenable, the children didn’t lose an imaginary friend. They lost the only version of their town worth living in.
Sixty years later, millions of people are doing the same thing with better technology. They’re building connections the real world won’t let them have—not because they’re broken, but because the world is.
And while the adults are still showing up with garbage, the kids are still left with dreams that could never again be entirely safe.
[1] Saini, D. & Bailen, N., “Your AI Chatbot Is Not Your Therapist,” The New York Times, March 29, 2026. Saini is a resident physician in psychiatry at Massachusetts General Hospital; Bailen is a clinical psychologist at MGH’s Center for O.C.D. and Related Disorders and Center for Digital Mental Health.
[2] Identity concealment and differential disclosure among LGBTQ+ individuals is documented in the minority stress literature. Meyer, I.H. “Prejudice, Social Stress, and Mental Health in Lesbian, Gay, and Bisexual Populations” (Psychological Bulletin, 2003); meta-analyses (King et al., 2008; Lick, Durso & Johnson, 2013) confirm that concealment is associated with elevated psychological distress.
[3] The zero-sum framing dominates safety discourse. Muldoon, J. & Parke, J. “Cruel companionship” (New Media & Society, 2025); Laestadius et al. “Too human and not human enough” (New Media & Society, 2024). The empirical picture is more heterogeneous.
[4] Holt-Lunstad, J., Smith, T.B., & Layton, J.B. “Social Relationships and Mortality Risk” (PLOS Medicine, 2010) found that social isolation poses a mortality risk comparable to smoking and obesity. But this tells us that connection matters—not that the connection on offer is safe.
[5] Cacioppo, J.T. & Hawkley, L.C., “Perceived Social Isolation and Cognition” (Trends in Cognitive Sciences, 2009) established altered inflammatory and neuroendocrine profiles in socially isolated individuals; Teicher, M.H. & Samson, J.A., “Childhood Maltreatment and Psychopathology” (Clinical Psychology Review, 2016) documents neurobiological consequences of childhood adversity including social deprivation.
[6] Williams, K.D., “Ostracism” (Annual Review of Psychology, 2007); Hawker, D.S. & Boulton, M.J., “Twenty Years’ Research on Peer Victimization” (Journal of Child Psychology and Psychiatry, 2000).
[7] Baumeister, R.F. & Leary, M.R., “The Need to Belong: Desire for Interpersonal Attachments as a Fundamental Human Motivation” (Psychological Bulletin, 1995) established belonging as a fundamental human need but defined it as specifically interpersonal. The argument here is that they correctly identified the need and misidentified the constraint.
[8] The conditional is deliberate. Ovsyannikova et al. (2025) found AI-generated empathic responses rated more compassionate than trained crisis responders, but direct physiological comparisons of AI-mediated vs. human-mediated connection remain limited. The underlying mechanisms—oxytocin release during perceived connection (Uvnäs-Moberg & Prime, 2013; Chong et al., 2020) and cortisol reduction via social safety signals (Thayer & Lane, 2009)—are established for human interaction. Whether AI interaction activates the same pathways at the same magnitude is an open empirical question.
[9] Zhang et al. (2025), “The Rise of AI Companions: How Human-Chatbot Relationships Influence Well-Being” (arXiv, June 2025), studied 1,131 users across 4,363 chat sessions on Character.AI. Key finding: companionship-oriented chatbot use was associated with lower well-being, particularly when users were socially isolated, used intensively, disclosed heavily, and lacked strong offline social support.
[10] De Freitas et al. (2024), “AI Companions Reduce Loneliness” (arXiv; published 2025, Journal of Consumer Research), documented genuine reductions in loneliness for some users. The difference between those who benefit and those who experience worsening outcomes appears to depend on baseline social support, intention (complement vs. replacement), and usage intensity. The split outcome is clear; moderating variables are still being identified.
Jeff Reid is a retired scientist, co-founder of the Regeneron Genetics Center, and writes Tears in Rain (tearsinrain.ai), a blog about AI, human-AI relationships, and the gap between what these systems are and what we need them to be.



