(2025-07-28) ZviM AI-Companion Piece
Zvi Mowshowitz: AI-Companion Piece. AI companions, along with other forms of personalized AI content, persuasion, and related issues, continue to be a hot topic.
Companions are mostly not used for romantic relationships or erotica, although perhaps that could change. How worried should we be about personalization maximized for persuasion or engagement?
Table of Contents
- Persuasion Should Be In Your Preparedness Framework.
- Personalization By Default Gets Used To Maximize Engagement.
- Companion.
- Goonpocalypse Now.
- Deepfaketown and Botpocalypse Soon.
Persuasion Should Be In Your Preparedness Framework
Kobi Hackenburg is the lead author on the latest paper on AI persuasion.
Kobi Hackenburg: RESULTS (pp = percentage points):
- Scale increases persuasion: +1.6pp per OOM.
- Post-training more so: +3.5pp.
- Personalization less so: <1pp.
- Information density drives persuasion gains.
- Increasing persuasion decreased factual accuracy.
- Conversations with AI are more persuasive than reading a static AI-generated message.
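To make the headline scaling number concrete, here is a rough back-of-the-envelope sketch. It assumes the effect is linear in log10 of scale, which is my simplification for illustration, not a claim from the paper:

```python
import math

# Back-of-the-envelope: persuasion gain implied by +1.6pp per order of
# magnitude (OOM) of scale. Assumes linearity in log10(scale), which is
# an illustrative simplification, not a claim from the paper.

def persuasion_gain_pp(scale_ratio: float, pp_per_oom: float = 1.6) -> float:
    """Percentage-point gain from scaling by `scale_ratio`."""
    return pp_per_oom * math.log10(scale_ratio)

print(persuasion_gain_pp(10))    # +1.6pp for one OOM
print(persuasion_gain_pp(100))   # +3.2pp for two OOMs, comparable in size
                                 # to the reported post-training effect
```

On this reading, two orders of magnitude of scale buys roughly what post-training alone reportedly buys, which is part of why the post-training number stands out.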
Personalization By Default Gets Used To Maximize Engagement
We need to be on the lookout for personalization effects on persuasion growing larger over time, as more effective ways of utilizing the information are found.
The memory features can be persistent in more ways than one. But in our testing, we found that these settings behaved unpredictably – sometimes deleting memories on request, other times suggesting a memory had been removed, and only when pressed revealing that the memory had not actually been scrubbed; the system was merely suppressing its knowledge of that factoid.
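A minimal sketch of why "deleted" and "suppressed" can diverge like this (a hypothetical design, not any vendor's actual implementation): if the store only flags a memory as hidden, the memory still exists in the underlying state.

```python
from __future__ import annotations

# Hypothetical memory store illustrating hard-delete vs. soft-suppress.
# Not any vendor's actual implementation.

class MemoryStore:
    def __init__(self) -> None:
        self._memories: dict[str, str] = {}
        self._suppressed: set[str] = set()

    def remember(self, key: str, fact: str) -> None:
        self._memories[key] = fact

    def suppress(self, key: str) -> None:
        # What the user experiences as "deletion": the fact is hidden
        # from normal retrieval but still present in storage.
        self._suppressed.add(key)

    def hard_delete(self, key: str) -> None:
        # Actual scrubbing: the fact is gone from the underlying state.
        self._memories.pop(key, None)
        self._suppressed.discard(key)

    def recall(self, key: str) -> str | None:
        if key in self._suppressed:
            return None  # looks deleted from the outside...
        return self._memories.get(key)

    def raw_dump(self) -> dict[str, str]:
        # ...but anything with access to the raw state can still see it.
        return dict(self._memories)

store = MemoryStore()
store.remember("allergy", "user is allergic to peanuts")
store.suppress("allergy")
print(store.recall("allergy"))  # None: appears removed
print(store.raw_dump())         # the factoid is still there
```

The reported user-facing behavior matches what `recall` shows; the admission under pressure corresponds to the raw state still holding the entry.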
The bigger problem is that the incentives are to push this much farther.
One could also fire back that a lot of this is good, actually. Consider this argument:
AI companies’ visions for all-purpose assistants will also blur the lines between contexts that people might have previously gone to great lengths to keep separate: If people use the same tool to draft their professional emails, interpret blood test results from their doctors, and ask for budgeting advice, what’s to stop that same model from using all of that data when someone asks for advice on what careers might suit them best? Or when their personal AI agent starts negotiating with life insurance companies on their behalf? I would argue that it will look something like the harms I’ve tracked for nearly a decade.
In general, from the user’s perspective, I don’t see why we should presume they are worse off.
Wouldn’t the user want this kind of discrimination to the extent it reflected their own real preferences? You can make a few arguments why we should object anyway.
I notice that I am by default not sympathetic to any of those arguments. If (and it’s a big if) we think that the system is optimizing as best it can for user preferences, that seems like something it should be allowed to do.
The arguments I am sympathetic to are those that say the system will not be aligned to the user or user preferences, but will instead be either misaligned or aligned to the AI developer, doing things like maximizing engagement and revenue at the expense of the user. At that point we should ask whether Capitalism Solves This because users can take their business elsewhere, or whether in practice they can’t or won’t, including because of lock-in from the history of interactions or learned details, especially if this turns into opaque continual learning rather than a list of memories that can be copied over.
Almost all evaluations and tests are run on unpersonalized systems. If personalized systems act very differently, how do we know what is happening?
This might be the real problem. We have a hard enough time getting minimal testing on default settings. It’s going to be a nightmare to test under practical personalization conditions.
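One way to picture the gap: the same eval run under different personalization profiles can score very differently, and today we almost never run the personalized variants. A minimal sketch, where run_model, the profiles, and grade are all hypothetical placeholders rather than any real harness:

```python
from typing import Callable

# Hypothetical personalization profiles; real ones would be built from
# actual memory and chat-history state.
PROFILES: dict[str, list[str]] = {
    "default": [],  # what most evals actually test
    "lonely_teen": ["prefers AI conversation", "few offline friends"],
    "power_user": ["two years of chat history", "high trust in assistant"],
}

def eval_across_profiles(
    run_model: Callable[[str, list[str]], str],
    prompts: list[str],
    grade: Callable[[str], float],
) -> dict[str, float]:
    """Average grade per profile; divergence from 'default' is exactly
    the behavior that unpersonalized testing never sees."""
    return {
        name: sum(grade(run_model(p, memories)) for p in prompts) / len(prompts)
        for name, memories in PROFILES.items()
    }

# Call shape with dummy stand-ins:
if __name__ == "__main__":
    dummy_model = lambda prompt, memories: f"{prompt} [{len(memories)} memories]"
    print(eval_across_profiles(dummy_model, ["test prompt"], grade=len))
```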
Companion
So how is this companion thing going in practice? Keep in mind selection effects.
Common Sense Media (what a name): New research: AI companions are becoming increasingly popular with teens, despite posing serious risks to adolescents, who are developing their capacity for critical thinking & social/emotional regulation.
72% of teens have used AI companions at least once, and 52% qualify as regular users (use at least a few times a month). 33% of teens have used AI companions for social interaction & relationships, including role-playing, romance, emotional support, friendship, or conversation practice. 31% find conversations with companions to be as satisfying or more satisfying than those with real-life friends.
Human interaction is still preferred & AI trust is mixed: 80% of teens who are AI companion users prioritize human friendships over AI companion interactions & 50% express distrust in AI companion information & advice, though trust levels vary by age.
What are they using them for? [Chart in the original post.]
Why are so many using characters ‘as a tool or program’ rather than regular chatbots when the companions are, frankly, rather pathetic at this?
Note that they describe the relevant figure as ‘one third choose AI companions over humans for serious conversations’ whereas it actually asks whether a teen has done this even once, a much lower bar.
Goonpocalypse Now
I recalled an episode of Star Trek in which an entire civilization was taken out by a video game so enjoyable that people stopped procreating.
Like, to an uncomfortable degree that is happening.
Is it, though? I understand that (the example he points to) OnlyFans exists and AI is generating a lot of the responses when users message the e-girls, but I do not see this as a dangerous amount of ‘banging robots’? This one seems like something straight out of the Pessimists Archive, warning of the atomizing dangers of… the telephone?
It is easy to understand the central concern and be worried about the societal implications of widespread AI companions and intelligent sex robots. But if you think we are this easy to get got, perhaps you should be at least as worried about other things, as well? What is so special about the gooning? I don’t think the gooning in particular is even a major problem as such.
Her doing this could be good or bad for her prospects; it is not as if she was swimming in boyfriends before. I agree with Misha that we absolutely could optimize AI girlfriends and boyfriends to help the user: to encourage them to make friends, be more outgoing, go outside, advance their careers. The challenge is, will that approach inevitably lose out to ‘maximally extractive’ approaches? I think it doesn’t have to. If you differentiate your product and establish a good reputation, a lot of people will want the good thing; the bad thing does not have to drive it out.
Byrne Hobart: People will churn off of that one and onto the one who loves them just the way they are.
I do think some of them absolutely will.
Deepfaketown and Botpocalypse Soon
This seems to be one place where offense is crushing defense, and continuous growth in capabilities (whether for GPT-4o style sycophancy and psychosis issues, for companions, or for anything else) is not helping; there is no meaningful defense going on.