(2023-03-27) Bradley Do Submarines Swim

Seamus Bradley: Do Submarines Swim? There are two specific criticisms I’ve seen that I want to address here. On the basis of examples like those linked to above, people will say that the (LLM) AI app “doesn’t understand” what it’s being asked, or that the AI “can’t think”. I don’t think these criticisms help, because they’re saying that the AI lacks some property (understanding, thinking) that we don’t really know how to define. (consciousness)

The history of AI is basically the history of discovering that things we thought were tightly correlated with intelligence (whatever that means) are not, in fact, that tightly correlated with it. Pretty much as soon as we invented computers, we discovered that the ability to do arithmetic is not the exclusive preserve of intelligent animals like us. We used to think that playing high-level chess was something only intelligent creatures like us could do.

I can’t put it better than Edsger Dijkstra, who observed that the question of whether machines can think “is about as relevant as the question of whether submarines can swim.”

One obvious thing that LLMs are notorious for is something that, for some damn reason, people are describing as “hallucinating”.

They produce, in Harry Frankfurt’s technical sense, bullshit. That is, they do not care whether the information they provide is true or false. (There are so many caveats I want to add to that last sentence that I’m going to write a whole paragraph at the end all about them.)

What is missing, arguably, is any sort of model of the way the world is, or any sort of concept of a “fact” or truth and falsity. On top of that, LLMs are also missing any sort of introspection of what facts they know, and, importantly, what they don’t know.

These features – a concept of a fact, and introspective access to our own state of knowledge – are obviously key features of our cognitive make-up, and arguably they are both important parts of whatever intelligence is. (Draw your own connections between introspection and Gödel, Escher, Bach here.)

The other thing that LLMs seem to be terrible at is, for want of a better term, common sense.

What’s fascinating is that these aspects of cognition – facts, introspection, inference – are among the very first things that people thought an artificial intelligence would need in order to be considered as such.

This is not moving the goalposts. These have been the goalposts all along; we just got seduced by the bullshit and the ability to play games.

Let’s have that paragraph of caveats. You can skip this paragraph if nothing about the following sentence bothered you: “That is, they do not care whether the information they provide is true or false.” First.... (skipping rest)

Large Language Models (and other kinds of Machine-Learning-based AIs) are remarkably good at some things, and they’re getting better at an incredible rate.

But how good they are at some things makes their obvious flaws all the more striking.

It is possible to define more precisely what the failures of these AIs are, and when we do, it is striking that they appear to be bad at precisely the things that the old-fashioned logic-based AIs were good at. I’m not arguing for a return to Good Old-Fashioned AI, or that expert systems are a viable route to Artificial General Intelligence, but it really is striking how closely the most glaring flaws of these LLMs match up with precisely the strengths of the symbolic AI approach.
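To make that contrast concrete, here is a minimal sketch (my own illustration, not from Bradley’s post, with invented facts and names) of the kind of thing a symbolic system does trivially: store explicit facts, apply an explicit inference rule, and answer “unknown” when a query cannot be derived from what it knows.

```python
# Toy symbolic knowledge base: explicit facts, forward-chaining inference,
# and an honest "unknown" for anything it cannot derive.

facts = {("penguin", "is_a", "bird"), ("bird", "has", "feathers")}

rules = [
    # Toy inheritance rule: if X is_a Y and Y has Z, then X has Z.
    lambda kb: {(x, "has", z)
                for (x, r1, y) in kb if r1 == "is_a"
                for (y2, r2, z) in kb if r2 == "has" and y2 == y},
]

def infer(kb):
    """Apply all rules repeatedly until no new facts appear."""
    kb = set(kb)
    while True:
        new = set().union(*(rule(kb) for rule in rules)) - kb
        if not new:
            return kb
        kb |= new

def query(kb, triple):
    """Return True if the triple is derivable, otherwise report 'unknown'."""
    return True if triple in infer(kb) else "unknown"

print(query(facts, ("penguin", "has", "feathers")))  # True (derived)
print(query(facts, ("penguin", "has", "gills")))     # "unknown" (not derivable)
```

The point is only that “fact”, “inference”, and “I can’t derive that” are first-class notions in this style of system – which is exactly what the post argues LLMs lack.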

