(2023-04-07) Willison: We Need To Tell People ChatGPT Will Lie To Them, Not Debate Linguistics

Simon Willison: We need to tell people ChatGPT will lie to them, not debate linguistics. ...form of problems that Perez et al. (2022) call sycophancy, where a model answers subjective questions in a way that flatters their user’s stated beliefs, and sandbagging, where models are more likely to endorse common misconceptions when their user appears to be less educated.

I completely agree that anthropomorphism is bad: these models are fancy matrix arithmetic, not entities with intent and opinions.

But in this case, I think the visceral clarity of being able to say “ChatGPT will lie to you” is a worthwhile trade.

They appear astonishingly capable, and their command of human language can make them seem like a genuine intelligence, at least at first glance.

But the more time you spend with them, the more that illusion starts to fall apart.

We need to explain this in straightforward terms.

ChatGPT cannot be trusted to provide factual information.

Systems like ChatGPT are not sentient, or even intelligent, systems.

It is vitally important that new users understand that these tools cannot be trusted to provide factual answers.

We should be shouting this message from the rooftops: ChatGPT will lie to you. That doesn’t mean it’s not useful—it can be astonishingly useful, for all kinds of purposes... but seeking truthful, factual answers is very much not one of them

At this point, using ChatGPT in the way that I do feels like a massively unfair competitive advantage. I'm not worried about AI taking people's jobs: I'm worried about the impact of AI-enhanced developers like myself.

It genuinely feels unethical for me not to help other people learn to use these tools as effectively as possible. I want everyone to be able to do what I can do with them, as safely and responsibly as possible.

