(2024-04-17) Farrell Cybernetics Is The Science Of The Polycrisis

Henry Farrell: Cybernetics is the science of the polycrisis. One of the most interesting ‘might have been’ moments in intellectual history happened in the early 1970s, when Brian Eno traipsed to a dingy cottage in Wales to pay homage to Stafford Beer.

Beer said ‘I carry a torch, a torch that was handed to me along a chain from Ross Ashby.’ … He was telling me the story of the lineage of … this body of ideas [cybernetics] and said ‘I want to hand it to you, I know it’s a responsibility and you don’t have to accept, I just want you to think about it’.

Fifty years later, Beer has found a different torchbearer, one who, like Eno, has a visible sense of humor about it all. Dan Davies has written a new book, which blows the fug off Stafford Beer Thought, shooing away all the mud-encrusted dogs so that you see what is really useful.

...hasn’t just revived Beer’s version of cybernetics but presented it in clear, easily read chapters, and remade it for a different era.

Dan argues that management cybernetics is the great lost tradition of thinking that might actually provide an alternative to neo-classical economics.

So why is Beer’s version of management cybernetics (to be distinguished from the optimization-focused versions that were popular in the USSR and China) potentially valuable?

It’s almost certainly a good thing that Eno decided to make music rather than become a guru of cybernetics

It builds a bridge to span the yawning divide between how we think about information, and how we think about the economy, politics, and society

There is a lot we could learn if we understood social, political and economic relations as really involving information flows. And Beer’s version of cybernetics provides one very intellectually attractive way of doing this.

Dan draws on Beer, who pulls from William Ross Ashby, the person from whom, he said, he inherited the living flame. And Ashby’s “Law of Requisite Variety” is a really, really important idea

there are many, many surprises - complex systems are by their nature difficult to predict. If you do want to anticipate these surprises, and even more importantly, to manage them, you need to have your own complex systems, built into your organization. And these systems need to be as complex as the system that you’re trying to manage.

Hence, the “Requisite Variety.” In Dan’s summarization “anything which aims to be a ‘regulator’ of a system needs to have at least as much variety as that system.” Or, put a little differently, “if a manager or management team doesn’t have information-handling capacity at least as great as the complexity of the thing they’re in charge of, control is not possible and eventually, the system will become unregulated.”
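Ashby’s law can be made concrete with a toy model (my own construction for illustration, not from Beer or from Dan’s book): an environment throws one of N distinct disturbances at a system, and a regulator with a smaller repertoire of responses can at best collapse the variety of outcomes by a factor of its own variety.

```python
# Toy illustration of Ashby's Law of Requisite Variety.
# Disturbance d and response r combine into an outcome (d + r) % N;
# the regulator wants to hold the outcome to as few distinct states as possible.

N = 9  # distinct disturbances the environment can produce

def outcome_variety(num_responses: int) -> int:
    """Distinct outcomes under the strategy r(d) = (-d) % num_responses,
    which achieves the N / num_responses bound for this outcome table."""
    outcomes = {(d + (-d) % num_responses) % N for d in range(N)}
    return len(outcomes)

print(outcome_variety(3))  # → 3: a 3-response regulator leaves 3 residual outcomes
print(outcome_variety(9))  # → 1: only matching variety pins the system down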

This points to the very important differences between the Soviet and Chinese notions of cybernetics and Beer’s approach. State socialist cybernetics mostly assumed, as does Silicon Valley today, that economic complexities could be disentangled, simplified and turned into easily solvable optimization problems

Sometimes, simplification-and-optimization will be very useful. Sometimes, it may leave you worse off than when you started. Read Francis Spufford’s Red Plenty for more.

Another important difference is that Beer and people in his tradition treat the math as a source of valuable metaphors rather than directly applicable methods...it is the kind of math that rubs your nose in crucial but annoying facts about the complexities of the world, without giving you handy means to turn these complexities into truly tractable simplifications. That doesn't seem terribly useful, beyond being a vibe.

its fundamental message is that while you can manage a complex environment, you cannot usually manage it away without either changing the environment or (the more common default choice) pretending that the complexities don’t exist.

So how do you manage an inherently complex system? Beer talks about “variety engineering”, and points to two broad approaches to making it work

One has already been hinted at: attenuation. Here, you take what is complex, and you make it less so.

The second is amplification. Here, crudely speaking, you amp up the variety inside the organizational structures that you have built, so that it better matches the variety of the environment you find yourself in. Very often, this involves building better feedback loops through which different bits of the organization can negotiate with each other over unexpected problems.
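The two moves can be put in toy arithmetic (again my own sketch, not Beer’s formalism): attenuation shrinks the environment’s variety, amplification grows the regulator’s, and either one reduces the residual, unmanaged variety left over.

```python
# Residual (unmanaged) variety when a regulator with `responses` options
# faces an environment presenting `env_states` distinct situations.
from math import ceil

def residual_variety(env_states: int, responses: int) -> int:
    return ceil(env_states / responses)

print(residual_variety(1000, 10))   # → 100: the baseline mismatch
print(residual_variety(100, 10))    # → 10: attenuation - simplify the environment
print(residual_variety(1000, 100))  # → 10: amplification - grow the regulator
```

Either move closes the same gap; which one is appropriate is exactly the kind of political and organizational judgment that the model itself cannot make for you.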

There is a lot more to this - e.g. thinking about how different parts of the regulatory organization ought to work as different ‘systems’ - but again, it’s management more than science

The great advantage of this approach is that it can be scaled up or down. The great disadvantage is that it offers you no inherent technique for figuring out which scale you ought to be working at, or which particular means you ought to be using at that scale

But - like all good perspectives and techniques - once you have grasped what it tells you, you see examples of it everywhere

but first, a few examples.

Social media content moderation. This is an inherently horrible cybernetic task in ways that Mike Masnick’s “Impossibility Theorem” captures nicely.

the bad actors can display a lot of ingenuity in trying to figure out how to counteract moderation and propel things in bad directions

Social media at scale is inherently unpredictable, which is another way of saying that there is an enormous variety of possible directions that millions of people’s interactions can take

The result is (a) enormous variety, and (b) malign actors looking to increase this variety, and to push it in all sorts of nasty directions

Now, social media companies find themselves obliged to amplify (increasing their ability to moderate through hiring or investing in machine learning), to attenuate (limiting variety; e.g. by stifling political discussion as Meta’s Threads has done), or some combination (Bluesky and the Fediverse combine new tools with smaller scale and lesser variety in particular instances).

when Meta decides that Threads will deal with the problem of spiraling political disagreement by dampening down all political discussions on its platform, it is dealing with a cybernetic problem using cybernetic means

But should it be Meta that is in charge of making such a profound and political decision? Cybernetics doesn’t provide any very specific answer to that question, but it makes it much easier to see the problem.

Equally, we need to recognize that if we are going to have to regulate vast swathes of the human conversation, we are going to face some messy and unhappy tradeoffs.

The proper constitution of the state. The most cybernetic book, apart from Dan’s, that I have read in the last few years is Jen Pahlka’s Recoding America.

If you read Jen’s book carelessly, you might come away with the impression that it is about the U.S. government’s incompetence at contracting out software development. If you read it carefully, you will realize that it is actually an applied informational theory of the state

Jen argues that we need to move away from top down decision making, to systems that will allow bureaucrats a lot more autonomy. She frames her argument for change in terms of “agile” software design, which would appear to have an awful lot in common with Beer’s approach to thinking about organization

Most of Jen’s examples involve information systems, because that is what she has worked on, but the logic extends far further. In particular, I think that it extends to an important debate that is happening right now over economic security policy making.

A lot of our current thinking about how to make such policy takes a brute force approach, dismissing efforts to calibrate and fine tune policy as unhelpful and irrelevant. Adam Tooze, for example, whom I agree with on vast swathes of issues, more or less dismisses so-called “Swiss army-knife strategies” or “polysolutions” that try “to fix several interconnected problems at the same time” as an overly ambitious “optimizing approach,” which makes the “strong assumption” that “we do, in fact, have a pretty good idea of the major challenges and how they hang together.” Instead, he prefers big fixes for the most immediate pain points.

a lot of the time, properly conceived polysolutions will try to do lots of things at once, not because they have a clear expectation that all these things will work, but because they are experimenting

In other words, there is also a strong case for making policy agile and on the fly, exploring the landscape of possibilities, rather than exploiting what we think we know already. And when something really works, one can try to double down and see what happens!

Large chunks of my and Abe Newman’s recentish Foreign Affairs piece on the pathologies of economic security policy making apply Jen’s and Dan’s ideas to the pathologies of the national security state

The progress agenda. There is a lot of disagreement among and between liberals and people on the left over how the U.S. should think about progress. Some of that disagreement stems from disputes over whether we should prefer to solve collective problems or to prioritize democratic control.

This is a real disagreement, and one that I don’t think can be simply resolved. But it is one that the language of cybernetics could at least help clarify.

For better or worse, the language of cybernetics is a technocratic language, not a democratic one

But it can have some benefits too. One of the major reasons why neoliberalism, which is its own kind of technocracy, succeeded is that it helped turn insuperable-seeming political conflicts into manageable ones

As I read Elizabeth Popp Berman’s fantastic history, neo-liberalism succeeded not, or at least not simply, because of Milton Friedman, the Mont Pelerin Society and the rest of it. It came to dominate because it was the only plausible language that people could minimally agree on, at a moment when enormously consequential new policies needed to be enacted

Here, I’m riffing not just on Berman’s book, but a really great essay by Suresh Naidu, which makes the point that neo-liberalism will not be replaced by liberal humanism, because liberal humanism isn’t up to the task of managing a complex society at scale.

The great advantage of cybernetics is that it provides exactly a language that can span the chasm between computer science and the needs of the large administrative state. It surely isn’t the only candidate for that task. You could, for example, revive some of the ideas of Herbert Simon, which have slightly different valences and applications. But it is a pretty good one, with an excellent pedigree.

Dan’s book suggests that economists are rather more allergic to the spreadsheets of organizational administration than Suresh suggests, and that cybernetics provides an excellent understanding of how balance sheets and financial accounts, themselves being models, inevitably attenuate out the things that their creators don’t want to pay attention to, even while they serve to amplify the possibilities of control in other areas.

There is a case against too. It can easily collapse into handwaving. Its lack of mathematical precision over the specifics means that it will have an easier time developing a kind of folk wisdom that unsophisticated practitioners can latch onto, but a harder time reconciling different versions of that folk wisdom to preserve coherence in tasks carried out across very complex organizations

But even so, it holds immense promise. One of the greatest challenges we face is the mismatch between the vast complexity of the problems we need to solve (climate change; migration; international security), and the inadequacy of the informational and managerial institutions that we have to solve them. The bits of Dan’s book that I have not talked about explain why free market economics is incapable of resolving them.

