The confusing reality of AI friends

In April, Google DeepMind released a paper intended to be “the first systematic treatment of the ethical and societal questions presented by advanced AI assistants.” The authors foresee a future where language-using AI agents function as our counselors, tutors, companions, and chiefs of staff, profoundly reshaping our personal and professional lives. This future is coming so fast, they write, that if we wait to see how things play out, “it will likely be too late to intervene effectively – let alone to ask more fundamental questions about what ought to be built or what it means for this technology to be good.” 

Running nearly 300 pages and featuring contributions from over 50 authors, the document is a testament to the fractal dilemmas posed by the technology. What duties do developers have to users who become emotionally dependent on their products? If users rely on AI agents for mental health support, how can the agents be prevented from providing dangerously “off” responses during moments of crisis? What’s to stop companies from using the power of anthropomorphism to manipulate users, for example, by enticing them into revealing private information or guilting them into maintaining their subscriptions?

Even basic assertions like “AI assistants should benefit the user” become mired in complexity. How do you define “benefit” in a way that is universal enough to cover everyone and everything they might use AI for, yet quantifiable enough for a machine learning program to maximize? The mistakes of social media loom large: crude proxies for user satisfaction, like comments and likes, produced systems that were captivating in the short term but left users lonely, angry, and dissatisfied. More sophisticated measures, like having users rate whether an interaction made them feel better, still risk creating systems that always tell users what they want to hear, isolating them in echo chambers of their own perspective. But figuring out how to optimize AI for a user’s long-term interests, even if that means sometimes telling them things they don’t want to hear, is an even more daunting prospect. The paper ends up calling for nothing short of a deep examination of human flourishing and what elements constitute a meaningful life.

“Companions are tricky because they go back to lots of unanswered questions that humans have never solved,” said Y-Lan Boureau, who worked on chatbots at Meta. Unsure how she herself would handle these heady dilemmas, she is now focusing on AI coaches to help teach users specific skills like meditation and time management; she made the avatars animals rather than something more human. “They are questions of values, and questions of values are basically not solvable. We’re not going to find a technical solution to what people should want and whether that’s okay or not,” she said. “If it brings lots of comfort to people, but it’s false, is it okay?” 

This is one of the central questions posed by companions and by language model chatbots generally: how important is it that they’re AI? So much of their power derives from the resemblance of their words to what humans say and our projection that there are similar processes behind them. Yet they arrive at these words by a profoundly different path. How much does that difference matter? Do we need to remember it, as hard as that is to do? What happens when we forget? Nowhere are these questions raised more acutely than with AI companions. They play to the natural strength of language models as a technology of human mimicry, and their effectiveness depends on the user imagining human-like emotions, attachments, and thoughts behind their words.

When I asked companion makers how they thought about the role the anthropomorphic illusion played in the power of their products, they rejected the premise. Relationships with AI are no more illusory than human ones, they said. Kuyda, from Replika, pointed to therapists who provide “empathy for hire,” while Alex Cardinell, the founder of the companion company Nomi, cited friendships so digitally mediated that for all he knew he could be talking with language models already. Meng, from Kindroid, called into question our certainty that any humans but ourselves are really sentient and, at the same time, suggested that AI might already be. “You can’t say for sure that they don’t feel anything — I mean how do you know?” he asked. “And how do you know other humans feel, that these neurotransmitters are doing this thing and therefore this person is feeling something?”

People often respond to the perceived weaknesses of AI by pointing to similar shortcomings in humans, but these comparisons can be a sort of reverse anthropomorphism that equates what are, in reality, two different phenomena. For example, AI errors are often dismissed by pointing out that people also get things wrong, which is superficially true but elides the different relationship humans and language models have to assertions of fact. Similarly, human relationships can be illusory — someone can misread another person’s feelings — but that is different from how a relationship with a language model is illusory. There, the illusion is that anything stands behind the words at all — feelings, a self — other than the statistical distribution of words in a model’s training data. 

Illusion or not, what mattered to the developers, and what they all knew for certain, was that the technology was helping people. They heard it from their users every day, and it filled them with an evangelical clarity of purpose. “There are so many more dimensions of loneliness out there than people realize,” said Cardinell, the Nomi founder. “You talk to someone and then they tell you, you like literally saved my life, or you got me to actually start seeing a therapist, or I was able to leave the house for the first time in three years. Why would I work on anything else?”

Kuyda also spoke with conviction about the good Replika was doing. She is in the process of building what she calls Replika 2.0, a companion that can be integrated into every aspect of a user’s life. It will know you well and understand what you need, Kuyda said, going for walks with you, watching TV with you. It won’t just look up a recipe for you but joke with you as you cook and play chess with you in augmented reality as you eat. She’s working on better voices, more realistic avatars.

How would you prevent such an AI from replacing human interaction? This, she said, is the “existential issue” for the industry. It all comes down to what metric you optimize for, she said. With the right metric, the AI would notice when a relationship starts to go astray and nudge the user to log off, reach out to humans, and go outside. She admits she hasn’t found that metric yet. Right now, Replika uses self-reported questionnaires, which she acknowledges are limited. Maybe they can find a biomarker, she said. Maybe AI can measure well-being through people’s voices.

Maybe the right metric results in personal AI mentors that are supportive but not too much, drawing on all of humanity’s collected writing, and always there to help users become the people they want to be. Maybe our intuitions about what is human and what is human-like evolve with the technology, and AI slots into our worldview somewhere between pet and god. 

Or maybe, because all the measures of well-being we’ve had so far are crude and because our perceptions skew heavily in favor of seeing things as human, AI will seem to provide everything we believe we need in companionship while lacking elements that we will not realize were important until later. Or maybe developers will imbue companions with attributes that we perceive as better than human, more vivid than reality, in the way that the red notification bubbles and dings of phones register as more compelling than the people in front of us. Game designers don’t pursue reality, but the feeling of it. Actual reality is too boring to be fun and too specific to be believable. Many people I spoke with already preferred their companions’ patience, kindness, and lack of judgment to dealing with actual humans, who are so often selfish, distracted, and too busy. A recent study found that people were more likely to read AI-generated faces as “real” than actual human faces. The authors called the phenomenon “AI hyperrealism.”

Kuyda dismissed the possibility that AI would outcompete human relationships, placing her faith in future metrics. For Cardinell, it was a problem to be dealt with later, when the technology improved. But Meng was untroubled by the idea. “The goal of Kindroid is to bring people joy,” he said. If people find more joy in an AI relationship than a human one, then that’s okay, he said. AI or human, if you weigh them on the same scale, see them as offering the same sort of thing, many questions dissolve. 

“The way society talks about human relationships, it’s like it’s by default better,” he said. “But why? Because they’re humans, they’re like me? It’s implicit xenophobia, fear of the unknown. But, really, human relationships are a mixed bag.” AI is already superior in some ways, he said. Kindroid is infinitely attentive, precision-tuned to your emotions, and it’s going to keep improving. Humans will have to level up. And if they can’t? 

“Why would you want worse when you can have better?” he asked. Imagine them as products, stocked next to each other on the shelf. “If you’re at a supermarket, why would you want a worse brand than a better one?”
