(2017-12-02) Zvim More Dakka

Zvi Mowshowitz on More Dakka. Eliezer Yudkowsky's book Inadequate Equilibria is excellent. I recommend reading it, if you haven't done so... My hope here is to offer both another concrete path to finding such opportunities, and additional justification of the central role of social control (as opposed to object-level concerns) in many modest actions and modesty arguments.

Eliezer uses several examples of civilizational inadequacy. Two central examples are the failure of the Bank of Japan and later the European Central Bank to print sufficient amounts of money (in 2013?), and the failure of anyone to try treating seasonal affective disorder (SAD) with sufficiently intense artificial light.

In a MetaMed case, a patient suffered from a disease with a well-known reliable biomarker and a safe treatment. In studies, the treatment improved the biomarker linearly with dosage.... No one tried increasing the dose enough to reduce the biomarker to healthy levels.
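The implied arithmetic is simple linear extrapolation. The numbers below are entirely hypothetical (the case's actual disease, doses, and units are not given), just to make the reasoning concrete: if the studied doses reduce the biomarker linearly, the same line tells you the dose that would reach the healthy range.

```python
# Hypothetical numbers, for illustration only: studies tested doses up
# to 40 mg and observed a linear dose-response; the healthy biomarker
# range starts at 20 units.
baseline = 100.0         # patient's biomarker with no treatment (units)
slope = 1.5              # observed reduction in units per mg of dose
max_studied_dose = 40.0  # highest dose anyone actually tried (mg)
healthy_threshold = 20.0

def biomarker(dose_mg):
    """Predicted biomarker level, extrapolating the observed linear trend."""
    return baseline - slope * dose_mg

# The highest studied dose still leaves the patient far from healthy...
at_max_studied = biomarker(max_studied_dose)  # 100 - 1.5*40 = 40 units

# ...but the same line says which larger dose would reach the healthy range.
dose_needed = (baseline - healthy_threshold) / slope  # (100-20)/1.5 ≈ 53.3 mg
```

Whether the linear trend actually continues past the studied range is exactly the empirical question no one bothered to test.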

In his excellent post Sunset at Noon, Raymond points out Gratitude Journals: "Rationalists obviously don't actually take ideas seriously. Like, take the Gratitude Journal. This is the one peer-reviewed intervention that actually increases your subjective well being, and costs barely anything. And no one I know has even seriously tried it. Do literally none of these people care about their own happiness?"

Gratitude journals are awkward interventions, as Raymond found, and we need to adapt the details to make the practice our own, or it won't work. But the active ingredient, gratitude, obviously works and is freely available.

I once sent a single gratitude letter. It increased my baseline well-being. Then I didn't write more. I do try to remember to feel gratitude, and express it. That helps. But I can't think of a good reason not to do that more, or for anyone I know to not do it more.

In all four cases, our civilization has (it seems) correctly found the solution.

There's probably a level where side effects would happen, but there's no sign of them yet.

We know the solution. Our bullets work. We just need more. We need More (and better) (metaphorical) Dakka - rather than firing the standard number of metaphorical bullets, we need to fire more, absurdly more, whatever it takes until the enemy keels over dead.

If it helps but doesn't solve your problem, perhaps you're not using enough.

We don't use enough to find out how much enough would be, or what bad things it might cause. More Dakka might backfire. It also might solve your problem.

The Bank of Japan wasn't printing enough money. They printed some. It helped a little. They could have kept printing more until it either solved their problem or started to cause other problems. They didn't.

Yes, some countries printed too much money and very bad things happened, but no countries printed too much money because they wanted more inflation.

Doctors saw patients suffer for lack of light. They gave them light. It helped a little. They could have tried more light until it solved their problem or started causing other problems. They didn't.

Doctors saw patients suffer from a disease in direct proportion to a biomarker. They gave them a drug. It helped a little, with few if any side effects. They could have increased the dose until it either solved the problem or started causing other problems. They didn't.

People express gratitude. We are told it improves subjective well-being in studies. Our subjective well-being improves a little. We could express more gratitude, with no real downsides. Almost all of us don't.

A decision was universally made that enough, despite obviously not being enough, was enough. 'More' was never tried.
This is important on two levels.

The first level is practical. If you think a problem could be solved or a situation improved by More Dakka, there's a good chance you're right.

Sometimes a little more is a little better. Sometimes a lot more is a lot better. Sometimes each attempt is unlikely to work, but improves your chances.
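The last case, where each individual attempt is a long shot, is worth making explicit: if attempts are independent with per-attempt success probability p, the chance that n attempts produce at least one success is 1 - (1 - p)^n. A minimal sketch, with made-up numbers:

```python
# If each independent attempt succeeds with probability p, the chance
# that n attempts yield at least one success is 1 - (1 - p)**n.
def p_at_least_one_success(p, n):
    return 1 - (1 - p) ** n

# A 5%-per-attempt long shot, tried once versus fifty times:
once = p_at_least_one_success(0.05, 1)    # 0.05
fifty = p_at_least_one_success(0.05, 50)  # ~0.92
```

A 5% shot fails 19 times in 20, but fifty such shots succeed more often than not, which is the sense in which More Dakka "improves your chances" even when no single attempt looks promising.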

If something is a good idea, you need a reason to not try doing more of it.

The second level is, 'do more of what is already working and see if it works more' is as basic as it gets. If we can't reliably try that, we can't reliably try anything.

Why would this be an overlooked strategy?
It sounds crazy that it could be overlooked. It's overlooked.

Eliezer gives three tools to recognize places where systems fail, drawing on highly useful economic arguments that I recommend applying frequently:

1. Cases where the decision lies in the hands of people who would gain little personally, or would lose out personally, if they improved things

2. Cases where decision-makers can’t reliably learn the information they need to make decisions, even though someone else has that information

3. Systems that are broken in multiple places so that no one actor can make them better

In the four cases above, I do not think such explanations are enough.

Here is my model. I hope it illuminates when to try such things yourself.

Two key insights here are The Thing and the Symbolic Representation of The Thing, and Scott Alexander's Concept-Shaped Holes Can Be Impossible To Notice. Both are worth reading, in that order.

I'll summarize the relevant points.

The standard amount of something, by definition, counts as the symbolic representation of the thing. The Bank of Japan 'printed money.' The standard SAD treatment 'exposes people to light.'

They got results. A little. Better than nothing. But much less than was desired.

An important variant of 'use more,' 'do more,' or 'do more often' is 'do it better.'

Being part of the second group, the people who go beyond the symbolic version of the thing, is harder than it looks:
You need to realize the thing might exist at all.
You need to realize the symbolic representation of the thing isn't the thing.
You need to ignore the idea that you've done your job.
You need to actually care about solving the problem.
You need to think about the problem a little....
(Longer list)

Why is this list getting so long? What is that answer of 'don't do it' doing on the bottom of the page?

Let's go through the list.

You need to realize the thing might exist at all.

Scott gives several examples of situations in which he doubted the existence of the thing.

You need to realize the symbolic representation of the thing isn't the thing.

Scott gives several examples where he thought he knew what the thing was, only to find out he had no idea; what he thought was the thing was actually a symbolic representation, a pale shadow. If you think having a few friends is what a community is, it won't occur to you to seek out a real one.

You need to ignore the idea that you've done your job.

You've checked that box off by getting the symbolic version of the thing. It's easy to then think you've done the job and are somehow done.

Even if you didn't get what you wanted, your real job was to earn the right to tell a story, to yourself and others, that you tried to get it.

You need to actually care about solving the problem.

Often people don't care much about solving the problem. They care whether they're responsible. They care whether socially appropriate steps have been taken.

You need to ignore the idea that no one could blame you for not trying.

People often care primarily about doing that which no one could blame them for.

Doing the normal thing means no one could blame you. If you don't grasp that this is a thing, read as much of Atlas Shrugged as needed until you grasp it. It should only take a chapter or two, but this idea alone is worth a thousand-page book to get, if that's what it takes.

You need to not care that what you're about to do is unusual or weird or socially awkward.

We go around being normal, only guessing which slightly weird things would get us in trouble, or would force us to get someone else in trouble. So we try to do none of them.

You need to not care that what you're about to do might not work.
Failing is just awful. Even at things that are supposed to mostly fail. Even when getting ludicrous odds. Only narrow, explicitly permitted exceptions are allowed, and they shrink each year.

You need to not care that what you're about to do is immodest.
By modesty's logic, anything you think of that's worth thinking has been thought of. Anything worth trying has been tried, anything worth doing done. Ignore that there's a first time for everything. Who are you to claim there's something worth trying?

You need to not care about the implicit accusation you're making against everyone who didn't try it.
You're not only calling them wrong. You're saying the answer was in front of their face the whole time.

That's what the Bank of Japan was actually afraid of. Nothing. A vague feeling they were supposed to be afraid of something, so they kept brainstorming until something sounded plausible.

'The markets don't like it when we print too much money!' The opposite is true. We have real-time data. The Nikkei goes up on talk of printing money, down on talk of not printing money, and goes wild on actual unexpected money printing.

These worries aren't real. They're in your head.

If someone else has these concerns, the concerns are in their head, whispering in their ear. Don't hold it against them. Help them.

My practical suggestion is that if you do, buy, or use a thing, and it seems like that was a reasonable thing to do, you should ask yourself: Can I do more of this? Can I do this better? Put in more effort, more time, and/or more money? Might that do the job better?

The bigger picture point is also important. These are the most obvious things. Those bad reasons stop actual everyone from trying things that cost little, on any level, with little risk, on any level, and that carry huge benefits. For other things, they stop almost everyone.

Adding that to the economic model of inadequate equilibria, and the fact that almost no one got as far as considering this idea at all, is it any wonder that you can beat 'consensus' by thinking of and trying object-level things?
Why wouldn't that work?

