(2018-01-24) Rao Electric Monks And Fast Transients

Venkatesh Rao: Electric Monks and Fast Transients

I’ve been saying lately that I want to turn myself into a bot. The Electric Monk, from Douglas Adams’ Dirk Gently’s Holistic Detective Agency, is a labor-saving device designed to believe things for you.

You could say that AI is based on the idea that the essence of being human is intelligence: cognitive prowess. I disagree. I think the essence of being human is the capacity for Belief. Traditional AIs can hold explicit beliefs and reason with them, but they are only just learning to actually believe them.

Here’s the thing: when you can get computers to believe for you, your own thinking can get a lot more agile, burdened with a lot less belief inertia. In John Boydian terminology, outsourcing belief work leads to faster transients. This has a LOT of profound consequences.

The central transaction cost in human cognition is context-switching cost. This comes in two varieties: voluntary and forced.

In both cases, the measure of the challenge is how long it takes you to switch, or the length of the transient. Ideally, you want fast transients, but without sacrificing orientation quality.

How fast your transients can be depends on three things: the range of alternative orientations you can call up (Model Agnosticism), the cost of the switch itself, and the degree of improvisation required when no ready-made orientation fits.
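A toy model makes the tradeoff concrete. This is my own illustration, not Rao's; the orientation names and cost numbers below are invented:

```python
# Toy model of transient time as a function of the three factors above.
# Everything here is a made-up illustration, not a real measurement.

def transient_time(library, target, switch_cost, improv_cost):
    """Time to reorient toward `target`.

    library     -- orientations held ready (model agnosticism = a bigger set)
    switch_cost -- fixed cost of any context switch
    improv_cost -- extra cost of improvising an orientation from scratch
    """
    if target in library:                 # a richer library means more hits here
        return switch_cost
    return switch_cost + improv_cost      # miss: improvise a new orientation

print(transient_time({"dogfight", "landing"}, "dogfight", 1.0, 10.0))  # 1.0
print(transient_time({"dogfight", "landing"}, "ambush",   1.0, 10.0))  # 11.0
```

A richer library raises the hit rate, better tooling lowers the switch cost, and practice lowers the improvisation penalty.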

The lesson from the origin story of fast transients is that when machines do a lot of the work for you, you are only as good as your context-switching capabilities.

By having to devote less attention to the more basic behavior of struggling with controls, the pilot could pay more attention to other factors... The transients weren’t just faster, they were better.

You could say the F-86 Sabre was a better artificial believing system (ABS) than the MiG-15: its hydraulic controls embodied beliefs about flying the plane that the pilot no longer had to hold and act on himself.

To speed up your transients, you can do three things, and technology has a role to play in each of them: a) maintain richer orientation libraries, b) switch faster, c) believe fewer things.

The first two, (a) and (b), are well-known mechanisms, but I argue they are highly limited: they don’t deliver order-of-magnitude improvements in human performance.

To get order-of-magnitude improvements in transients, you need to focus on believing fewer things. Which means your technological support systems have to believe more.

We use this mechanism in video games all the time. Human players do all the executive decision-making; the game characters execute kung-fu moves the players never could.

The kung-fu download in The Matrix is the same idea. Note that another kind of Matrix plot element, the bullet-dodging and gravity-defying leaps, represents a different idea: that you can use out-of-game knowledge to hack the game. (Game Playing)

Infinite Scream is an example of a very (very) simple electric monk. It usefully embodies part of an orientation’s mental models so I don’t have to. It believes things for me.
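To show how little machinery a monk this simple needs, here is a toy sketch. It is my own construction, not the actual Infinite Scream code, and print() stands in for posting to a social network:

```python
# A one-belief electric monk (toy sketch, not the real Infinite Scream).
# Its entire belief state: everything is on fire. It holds that belief
# continuously and expresses it on a schedule, so you don't have to.

import random
import time

def scream() -> str:
    return "A" * random.randint(2, 25) + "H" * random.randint(1, 8) + "!"

while True:
    print(scream())      # stand-in for posting to a real social network
    time.sleep(1800)     # keep believing between posts
```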

Unlike automation, artificial belief systems retain the uniquely personal subjective posture, tacit knowledge, and context inherent in believing and acting from belief.

An ABS creates a technologically extended society of mind: Your electric monk is a swarm belief system attached to your biological brain as a cloud of tacit belief energy.

An ABS captures not just what I believe. It captures the way I believe it, and how I am predisposed to act from within a particular state of active belief.

To take a more serious example, consider a now-extinct creature: the BlackBerry-driven execubot of the late 90s/early 00s, before the iPhone changed that game.

The personal assistant of the late 90s was an extension of the executive’s beliefs about priorities and context. Since the BlackBerry could do so little, the admin assistant did a lot.

Executives would arrive at one meeting just minutes after the previous one, and the right emails and PowerPoints would be ready for them, opinions and decisions already half-formed.

In the first few years, the iPhone made things worse precisely because it was so much more capable than the BlackBerry: some of the belief-work done by admins fell through the cracks as executives reeled it back onto the device themselves.

Now we’re recovering. It is no accident that exercising the iPhone’s greater power over the BlackBerry required the invention of the Siris, Alexas, and Cortanas of the world.

The functional reference for the ABS is the human female administrative assistant of the BlackBerry era. That’s the starting point for the development of full-blown Electric Monks.

AI research initially believed the hard job was the CEO’s. ABSes embody a recognition that perhaps the harder job is the admin assistant’s.

But let’s return to the problem of context-switching.

One way to measure the effectiveness of your context switching is to ask: how many parallel threads of execution can you create the illusion of driving in your life?

I grew up with the metaphor of spinning plates.

In the pre-mobile era, an executive's parallel bandwidth was the number of projects/process "plates" (in the traditional rather than GTD sense) they could keep spinning.
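The plates metaphor is, at bottom, a scheduling claim: one sequential attention loop, visited fast enough, creates the illusion of parallelism. Here is a toy round-robin sketch; the project names and decay rates are invented for illustration:

```python
# Spinning plates as round-robin scheduling (toy illustration).
# Unattended plates lose spin each tick; the one attended gets a nudge.

plates = {"budget": 1.0, "hiring": 1.0, "launch": 1.0}   # hypothetical projects
DECAY, NUDGE = 0.2, 0.6

for tick in range(6):
    for name in plates:                          # everything decays while you look away
        plates[name] = max(0.0, plates[name] - DECAY)
    current = list(plates)[tick % len(plates)]   # attend to one plate per tick
    plates[current] = min(1.0, plates[current] + NUDGE)
    print(tick, {k: round(v, 2) for k, v in plates.items()})

# The cheaper each visit (the faster the transient), the more plates fit
# in the loop before one of them wobbles to zero.
```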

For senior execs, this often meant the number of direct reports.

Not coincidentally, this was close to Miller’s famous Magic Number 7, plus or minus 2.

Think beyond regular workplaces. Teachers handle 15 to 40 parallel “student” threads by batching them into joint meetings (classes). Pagerbot doctors in the US can be responsible for over 2,000 patients.

This kind of scaling depends on a lot more belief being externalized into artificial belief systems. Education and healthcare are full of such externalized beliefs.

A hospital embodies many orientations — emergency responses, ICU routines, surgical preparation — each with dozens of mental models involved. But a doctor or nurse only has to do a fraction of the necessary work of context switching from situation to situation.

When you increase your leverage by having artificial believing systems do more of your high-inertia believing work for you, you can be more like a lightly flitting bird. You flutter from process to process, a light touch here, a gentle nudge there. Mind like water. Ridiculously fast transients not because you’re a martial artist but because all your belief baggage is checked in.

I'd like to turn into a set of recursive COO-bots, all the way down.

If there’s an ideal that defines this aspiration to transcendence-by-bothood, it’s that you make yourself informationally tiny. To paraphrase Paul Graham, keep your informational identity small.

