(2020-07-20) Chin Strong Opinions Weakly Held Doesnt Work That Well

Cedric Chin: ‘Strong Opinions, Weakly Held’ Doesn't Work That Well. Instead of withholding judgment until an exhaustive search for data is complete, I will force myself to make a tentative forecast based on the information available, and then systematically tear it apart, using the insights gained to guide my search.

The only problem with it is that it doesn’t seem to work that well.

Eventually I read Philip Tetlock’s Superforecasting, and then I gave up on ‘Strong Opinions, Weakly Held’.

But at which point do you change your mind?

The problem, of course, is that this is not how the human brain works.

So, you might ask, what to do instead?

Use Probability as an Expression of Confidence

It’s easy to have strong opinions and hold on to them strongly. It’s easy to have weak opinions and hold on to them weakly. But it is quite difficult for the human mind to vacillate from one strong opinion to another.

Tetlock’s stated technique was developed in the context of a geopolitical forecasting tournament.

I remember thinking that geopolitical forecasting wasn’t particularly relevant to my job running an engineering office in Vietnam.

Annie Duke’s Thinking in Bets proposes the same approach, but drawn from poker, and the ‘rationalist’ community LessWrong has long-held norms around stating the confidence of their opinions.

More importantly, Duke and LessWrong have both discovered that the fastest way to provoke such nuanced thinking is to ask: “Are you willing to bet on that? What odds would you take, and how much?”

First: you are forced to calibrate the strength of your belief, which makes it easier to move away from it.

Second: by framing it as a bet, you suddenly have skin in the game, and are motivated to get things right.

As new information trickles in, you are allowed to update the % confidence you have.

This post is about thinking, not forecasting; I’m only confident recommending one over the other because I’ve had enough experience with both as analytical tools.
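The updating move the excerpt describes is Bayes' rule. A minimal sketch, with invented likelihood numbers purely for illustration (the article itself only says to adjust your percentage as evidence arrives):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a belief after one piece of evidence,
    via Bayes' rule. Inputs are probabilities in [0, 1]."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Start at 70% confidence. New evidence arrives that is twice as
# likely if the belief is true (0.8) as if it is false (0.4).
confidence = bayes_update(0.70, 0.8, 0.4)
print(round(confidence, 3))  # 0.824
```

The point is not the arithmetic but the posture: your confidence is a number that moves a measured amount with each piece of evidence, rather than a strong opinion that must be toppled wholesale.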

Tetlock had met up with Paul Saffo over the Good Judgment Project.

We confront a dilemma. What matters is the big question, but the big question can’t be scored. The little question doesn’t matter but it can be scored, so the IARPA tournament went with it.

Tetlock goes on to defend his approach: That is unfair.

Implicit within Paul Saffo’s “How does this all turn out?” question were the recent events that had worsened the conflict on the Korean peninsula.

It’s obvious that the big question is composed of many small questions.

If we ask many tiny-but-pertinent questions, we can close in on an answer for the big question.

The answers are cumulative.

I call this Bayesian question clustering.

Another way to think of it is to imagine a painter using the technique called pointillism.

There were question clusters in the IARPA tournament, but they arose more as a consequence of events than a diagnostic strategy. In future research, I want to develop the concept and see how effectively we can answer unscorable “big questions” with clusters of little ones.
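Tetlock leaves the mechanics as future research, but one naive way to make "the answers are cumulative" concrete is to treat each small-question answer as a piece of evidence about the big question and chain Bayes-rule updates. The questions and likelihoods below are hypothetical, invented only to show the shape of the calculation:

```python
def bayes_update(prior, likelihood_if_yes, likelihood_if_no):
    """One Bayes-rule update of the big-question probability."""
    num = prior * likelihood_if_yes
    return num / (num + (1 - prior) * likelihood_if_no)

# Each pair: how probable the observed small-question answer is if the
# big-question answer is 'yes' vs. if it is 'no' (invented numbers).
small_question_evidence = [
    (0.7, 0.3),  # answer strongly favours 'yes'
    (0.6, 0.5),  # answer weakly favours 'yes'
    (0.2, 0.6),  # answer favours 'no'
]

p_big = 0.5  # start agnostic on the unscorable big question
for lik_yes, lik_no in small_question_evidence:
    p_big = bayes_update(p_big, lik_yes, lik_no)
print(round(p_big, 3))  # 0.483
```

Like pointillism, no single dot settles the picture; the estimate for the big question emerges from the accumulation of many small, individually scorable answers.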

These are two different worlds, with two different standards for truth. You decide which one is more useful.

So don’t bother. The next time you find yourself making a judgment, don’t invoke ‘strong opinions, weakly held’. Instead, ask: “How much are you willing to bet on that?” Doing so will jolt people into the types of thinking you want to encourage.

Update: Brad Feld has a 2019 blog post throwing shade on the technique.

