(2019-02-26) Constantin Humans Who Are Not Concentrating Are Not General Intelligences
Sarah Constantin: Humans Who Are Not Concentrating Are Not General Intelligences. Recently, OpenAI came out with a new language model that automatically synthesizes text, called GPT-2.
The scary thing about GPT-2-generated text is that it flows very naturally if you’re just skimming, reading for writing style and key, evocative words.
But if I read with focus, I notice that the samples don’t make a lot of logical sense.
OpenAI HAS achieved the ability to pass the Turing test against humans on autopilot.
I know of a few people, acquaintances of mine, who, even when asked to try to find flaws, could not detect anything weird or mistaken in the GPT-2-generated samples.
Robin Hanson’s post Better Babblers is very relevant here. He claims, and I don’t think he’s exaggerating, that a lot of human speech is simply generated by “low order correlations”, that is, generating sentences or paragraphs that are statistically likely to come after previous sentences or paragraphs.
Let me call styles of talking (or music, etc.) that rely mostly on low order correlations “babbling”.
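As a purely illustrative sketch of what generating text from low order correlations looks like, here is a tiny bigram Markov-chain “babbler” in Python. Neither Constantin nor Hanson supplies code; the function names, the bigram order, and the toy corpus below are assumptions chosen only to make the idea concrete.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record, for each word, which words tend to follow it (a 'low order correlation')."""
    words = text.split()
    followers = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        followers[current_word].append(next_word)
    return followers

def babble(followers, length=20, seed_word=None):
    """Emit text that is locally plausible word-to-word but carries no global argument."""
    word = seed_word or random.choice(list(followers))
    output = [word]
    for _ in range(length - 1):
        choices = followers.get(word)
        if not choices:  # dead end: restart from a random word
            word = random.choice(list(followers))
        else:
            word = random.choice(choices)
        output.append(word)
    return " ".join(output)

# Toy corpus (illustrative only); any longer body of prose works better.
corpus = ("humans who are not concentrating are not general intelligences "
          "and humans who are concentrating notice that the babble does not cohere")
model = train_bigram_model(corpus)
print(babble(model, length=15))
```

Skimmed, the output reads like English; read closely, it has no thread, which is exactly the failure mode the post attributes both to GPT-2 and to inattentive human speech.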
I’ve interviewed job applicants, and perceived them all as “bright and impressive”, but found that the vast majority of them could not solve a simple math problem. The ones who could solve the problem didn’t appear any “brighter” in conversation than the ones who couldn’t.
I’ve taught public school teachers, who were incredibly bad at formal mathematical reasoning (I know, because I graded their tests), to the point that I had not realized humans could be that bad at math — but it had no effect on how they came across in friendly conversation after hours.
I also noticed, upon reading GPT2 samples, just how often my brain slides from focused attention to just skimming.
The mental motion of “I didn’t really parse that paragraph, but sure, whatever, I’ll take the author’s word for it” is, in my introspective experience, absolutely identical to “I didn’t really parse that paragraph because it was bot-generated and didn’t make any sense, so I couldn’t possibly have parsed it”, except that in the first case I assume the error lies with me rather than with the text. That is not a safe assumption in a post-GPT2 world.

Instead of “default to humility” (assume that when you don’t understand a passage, the passage is true and you’re just missing something), the ideal mental action in a world full of bots is “default to null” (if you don’t understand a passage, assume you’re in the same epistemic state as if you’d never read it at all). Maybe practice and experience with GPT2 will help people get better at doing “default to null”?