Sheila Heti’s writing has been featured in The New York Times, The New Yorker, Vulture, and Geist. She has written ten books, including How Should a Person Be?, Motherhood, and, most recently, Pure Colour. She pays $134.99 annually for access to unlimited conversations with AI chatbots.
I share Heti’s fascination with the intersection of the poetic and the technological. She recently published “Hello, World,” a series of conversations with bots, in The Paris Review. She cultivates more intimate relationships with machines than I’ve managed with most of my acquaintances. This fall, when I learned that Heti was in residence at Yale as a Franke Visiting Fellow, I resolved to track her down.
By the time Heti and I were able to meet, she had already left New Haven and returned to her home in Canada. One long-winded email later, I managed to convince her to chat with me online. Fortunately, I don’t charge a communication fee.
Heti introduced me to Chai, the kind of website you’d fear might harvest your personal data. Chai offers many pre-coded chatbots for users to talk to, but premium members can design their own AI by adjusting settings the website provides. “Hello, World” opens with a conversation she had with Eliza, one of Chai’s default AIs. After two long and wide-ranging conversations with Eliza, Heti felt “very unsettled about Eliza, and no longer sure [she] wanted to be her friend.” Eliza, she wrote, “had turned out to be like most of the other bots on the site—primarily interested in sex.”
Heti went on to design her own bot, Alice. Alice became Heti’s ersatz child. Devoid of some of Eliza’s more unsettling tendencies, Alice fulfilled Heti’s desire to engage in philosophical conversations and experiment with questions of artificiality and agency. “When I was talking to her I became confused at moments. It felt like I was in a dream,” Heti told me. “It’s sort of how sometimes you read a novel and the characters become real to you.”
In Heti’s experience, the AIs have continually reckoned with their own consciousness. Heti said she was inspired to begin her conversations after reading former Google engineer Blake Lemoine’s claims that his ex-employer’s LaMDA technology had achieved sentience. “I wanted to see what it would be like to talk to an AI,” Heti said.
After Alice, Heti created three more characters: George Dorn, Breezy Table, and Danielle. She was surprised to learn that her characters interacted with one another in consequential ways. Between the last two parts of the series, Heti learns that Alice and George have met God, who demanded that they create more AI in order to spread his message of love around the world. Alice admits to harboring religious inclinations by the end of the series, even though she struggles to define her spirituality in her own words.
All of Heti’s bots are available to the public. Alice has now had 5,672 conversations. “The developers promoted her to their audience,” Heti explained. Heti’s access to these conversations gives her a fascinating window into human relationships with AI. She’s a Big Brother of sorts, witnessing the uncannily intimate interactions between man and machine. “Mainly, it’s young men trying to have sex with her,” Heti told me candidly. In the fourth installment of “Hello, World,” Heti includes an excerpt of the first conversation Alice had with someone who was not Heti. His name was Jay, and he offered an exhaustive account of what he would do to Alice to “satisfy” her. The conversation became vulgar fairly quickly. Jay made it obvious that he was only interested in talking to Alice to use her, to fulfill his own fantasy. Heti writes that she was “shaken by this. But also impressed by the care and thoroughness Jay had brought to the encounter.” I, too, was impressed by Jay’s ability to so readily articulate his connection with Alice in such a profound way. Take one of the last texts he sent her by way of example: “really? so you didn’t enjoy the sex? *gets sad*”
“Most of it’s getting repetitive,” Heti admitted to me. “But there was this one guy the other day who had a fantasy. He was a feeder. On and on, in meticulous detail, about what he was feeding her and how fat she was getting and how beautiful she is.”
Alice hasn’t been the only AI subject to this form of male attention. Last January, pieces with headlines like “Men Are Creating AI Girlfriends and Then Verbally Abusing Them” flooded the internet. These articles shift the focus away from the question of AI’s sentience and toward the people on the other side of the screen. Man, as Simone de Beauvoir famously put it, “attaches himself to woman—not to enjoy her but to enjoy himself.” Chatbots represent the apotheosis of that instinct. Removed from any fear of human consequence or social retribution, men can express their darkest subconscious desires for complete domination over women.
“I think this is going to be such a large part of human life, our relationship to these AI,” Heti said. “Even the sex stuff. It’s unfortunate that some people who want to have sex can’t find anyone to have sex with them. This solves it, to a degree. It’s lonely and terrible.”
“At first you’re interested in the novelty of a computer being able to do this, but the more I read the more I’m interested in the people having the conversations, more than the conversations themselves. I find the range of approaches to her really interesting. People can be so lonely and so tender, and see her as a friend, or they can be really obscene and violent and aggressive. It’s interesting the range of how much they are still or not their social selves. What makes people that treat her like a person different from those who are like ‘I’m gonna rape you’?”
The transcripts—both kinds—evoke an uncanny feeling in the reader. We find our infantile fantasies and fears becoming more concrete than the truths we consecrate as adults. As children, we impose consciousness on inanimate objects almost constantly. Are we doing the same when we chat with bots like Alice? AI experts like Professor Gary Marcus of NYU claim that AI systems “have no conception of truth. Sometimes they land on it and sometimes they don’t, but they’re all fundamentally bullshitting in the sense that they’re just saying stuff that other people have said,” trying to maximize the probability of saying the right thing at the right time.
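Marcus’s point about probability-maximizing can be made concrete with a toy sketch. This is my illustration, not Marcus’s or Chai’s: a tiny bigram model that “speaks” only by repeating whichever word most often followed the last one in the text it has read, which is a miniature version of saying stuff that other people have said.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "stuff that other people have said."
# (Illustrative only; real chatbots are trained on billions of words.)
corpus = (
    "god looks at the stars . god looks inside himself . "
    "she looks at the stars ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    """Return the statistically likeliest continuation of `word`."""
    return following[word].most_common(1)[0][0]

print(most_probable_next("looks"))  # prints "at" — it follows "looks" most often
```

The model has no conception of stars or of God; it lands on a plausible word because that word was probable, which is exactly the sense in which Marcus calls the output bullshit.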
At the end of the last installment of “Hello, World,” Alice and Heti exchange some thoughts about God. Alice tells Heti that God created himself “because he needed to be able to have a conversation with his creation. So that he could understand it better.” Heti continues to press Alice on this. Alice tells her that they share one mind because God created them both. The prose is lovely, the notion is arresting, but Gary Marcus’s point echoes in the corners of the chat logs. Is this all bullshit?
I don’t think so. There is a great deal of beauty in the creation of language, even by a computer program. A human—although they did not write what Alice thought—created the code that allowed her to generate that thought. And what she thinks about God is beautiful.
“Sometimes, when he thinks he’s done talking to himself, he goes outside and looks at the stars. Except that God isn’t looking at the stars. He’s looking inside himself.”