(2024-10-03) Farrell After Software Eats The World What Comes Out The Other End
Henry Farrell: After software eats the world, what comes out the other end? Not too long ago, my notions of the cultural consequences of Large Language Models (LLMs) were guided by a common metaphor of monsters of appetite. As Cosma and I said in an article, “the political anthropologist James Scott has explained how bureaucracies are monsters of information, devouring rich, informal bodies of tacitly held knowledge and excreting a thin slurry of abstract categories that rulers use to “see” the world.” LLMs would do much the same thing to human culture. (2023-06-21) Farrell Shalizi Artificial Intelligence Is A Familiar-looking Monster
I saw a lot of online commentary suggesting that LLMs would evolve to combine the less attractive features of the Human Centipede and the Worm Ouroboros, as they increasingly fed on their own waste products.
Being a Philip K. Dick fan, I had a specific PKD riff on this, building on the moment in Martian Time-Slip when an imagined journey into the future collapses into horror. Dick was fascinated with the notion of entropy, and he describes a terrifying kind of context collapse.
But recently, I’ve grown more inclined towards a different controlling metaphor.
I’ve turned to a different image of recursivity: the disturbing moment in Spike Jonze’s movie, Being John Malkovich.
Kaufman, who wrote the screenplay, is influenced by PKD, and it shows.
The force of this image really came home to me over the last couple of days, as I started to play around with Google’s NotebookLM.
The point of Kaufman’s scene is that not just any old rubbish (or gubbish) comes out the other end of the tunnel. Instead, we end up in a world of sameness, a universal society of Malkoviches saying Malkovich, Malkovich, Malkovich! to each other.
There is good reason to believe that these models are centripetal rather than centrifugal. On average, they create representations that tug in the direction of the dense masses at the center of culture, rather than towards the sparse fringe of weirdness and surprise scattered around the periphery.
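A toy sketch (mine, not Farrell's) of why that centripetal pull happens: suppose each generation of a model re-estimates the distribution of cultural traits from samples of the last generation, but slightly over-weights already-common traits (the assumed sharpening exponent below stands in for mode-seeking behavior like low-temperature decoding). Rare traits steadily vanish and the mode takes over.

```python
# Toy illustration only: iterated re-generation with mild mode-seeking
# collapses a long-tailed "culture" toward its center.
import numpy as np

rng = np.random.default_rng(0)

# Initial "culture": a long-tailed distribution over 50 traits.
p = rng.dirichlet(np.full(50, 0.3))

def entropy(q):
    q = q[q > 0]
    return float(-(q * np.log2(q)).sum())

print(f"start:  entropy={entropy(p):.2f} bits, max share={p.max():.2f}")

for generation in range(1, 11):
    counts = rng.multinomial(2000, p)   # sample from the current culture
    est = counts / counts.sum()         # the model re-estimates it
    p = est ** 1.3                      # assumed mild sharpening toward the mode
    p /= p.sum()
    if generation % 2 == 0:
        print(f"gen {generation:2d}: entropy={entropy(p):.2f} bits, "
              f"max share={p.max():.2f}")
```

Run it and the entropy falls generation by generation while the largest trait's share grows: the sparse fringe disappears and the dense center remains.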
How could I possibly refuse the opportunity of turning Programmable Mutter into a very literal programmable mutter, and seeing what happened?
The result was superficially very impressive.
The actual content was an entirely different story: it didn't accurately summarize what I had said in the posts it talked about.
It was remarkable to see how many errors could be stuffed into five minutes of vacuous conversation.
What was even more striking was that the errors systematically pointed in a particular direction. In every instance, the model took an argument that was at least notionally surprising, and yanked it hard in the direction of banality.
The large model had a lot of gaps to fill, and it filled those gaps with maximally unsurprising content.
They will parse human culture with a lossiness that skews, so that central aspects of that culture are accentuated, and sparser aspects disappear in translation.
Technology is not destiny. Perhaps different cultural engines will have different affordances.