(2022-02-13) ZviM On Bounded Distrust

Zvi Mowshowitz: On Bounded Distrust. Response to (2022-01-26) Alexander Bounded Distrust. Would that it were that simple.

Knowing what information you can and can’t extract, and what claims you can trust from what sources in what contexts to what extent, is a vital life skill.

Making real efforts from your unique epistemic perspective will result in a unique set of heuristics for who you can and cannot trust.

Scott’s model and my own have much in common here, but also clearly have strong disagreements on how to decide what can and cannot be trusted. A lot of what my weekly Covid posts are about is figuring out how much trust we can place where, and how to react when trust has been lost.

I’ll start with the parts of the model where we agree, in list form.

The difference is that Scott seems to think that the government, media and other authority figures continue mostly to play by a version of these rules that I believe they mostly used to follow. (game rule)

Whereas in 2022, after everything that has happened with the pandemic and also otherwise, I strongly believe that the trust and epistemic commons that existed previously have been burned down. The price of breaking the old rules is lower, but it is more than that. The price of being viewed as actually following the old rules is higher than the cost of not following them, in addition to the local benefits of breaking the old rules. Thus the old rules mostly are not followed.

One of the new rules is to pretend (to pretend?) to be following the old rules, which helps. The new rules are much less about tracking physical truth and much more about tracking narrative truth.

Here are some of the things I am conspicuously excluding due to length and time.

Thus I’m not ‘putting it all together’ in an important sense. Not yet.

Now, on to the One Time.

There’s a lot of great tricks that only work once.

This is the high leverage moment.

if it succeeds you get your One Time back, but damn it This Had Better Work or there will be hell to pay

That’s one way of thinking about the price one has to pay for breaking some of these rules. Would this be slight annoyance? Or would you be cashing in your One Time?

Shooting at Yankee Stadium

Scott frames this as you being a liberal and thus lacking trust in Fox as a source, but it’s important to note that this does not matter. Either Fox News is trustworthy in a given situation, or it is not

Anyway, this is his first example

The combination of these factors is very strong, and in the absence of counterevidence I would treat this as true with probability of essentially (1 minus epsilon).

It’s probably still a mass shooting, but if my life depends on that being true, I’m going to double check.

Scott’s next hypothetical

Fox is saying that police have apprehended a suspect

Once again, yes, of course. This is no longer an admission against interest, but I notice this is an actual red line that won’t be crossed

If this was more speculative, they would use particular weasel words like ‘believed to (be/have)’ at which point all bets aren’t quite off but the evidence is not very strong.

However, I don’t agree with this, and even more don’t agree with the sign-reversed version of it (e.g. swap MSNBC for FOX and reverse all the facts/motivations accordingly): It doesn’t matter at all that FOX is biased.

if someone came into Scott’s office and said ‘I think FOX’s story today about that Saudi terrorist is importantly false’ then it would be a mistake to therefore suggest putting this person on medication or asking them to go to therapy.

you’re probably right. Last time I checked, they do have a red line there. But there’s a bunch of red lines I thought they had (and that I think previously they did have) that they’ve crossed lately, so how confident can we be?

Scott links to Everybody Knows to indicate this is a ‘the savvy know this and then treat it like everyone knows.’ But the savvy are necessarily a subset, and not all that large a subset at that. Not only does everyone very much not know this, I don’t even know this. (cf Common Knowledge)

for all our explicit disagreements, I expect Scott in practice to be using almost the same heuristics I am using here if such events were to happen, with the difference being that I think Scott should be adjusting more for recent declines in deserved trust, while he likely thinks I’m adjusting too far.

Lincoln and Marx

I’m going to first deal with the Lincoln and Marx example, then with the 2020 election after, although Scott switches back and forth between them.

The body of the article is a real piece of work. I didn’t need to see the counterargument to know it stinks, only to know exactly how much it stinks.

this seems a lot like what The New York Times did to Scott Alexander, drawing the desired associations and implications by any means technically available

The clues are all there. This is transparent obvious bullshit. It’s still easy to not spot the transparent obvious bullshit. When one is reading casually or quickly, it’s a lot easier to do a non-literal reading that will effectively lie to you, than the literal reading that won’t.

Brockell did not misread anything. Brockell looked for words that could be written to give an impression Brockell wished to convey while not crossing the red line of saying definitively false things of the wrong type, and Brockell found the best such words that could be found. There are no ‘faulty conclusions’ here, there are only implausible insinuations.

the rebuttal is deeply convincing, and the fact that the original made it into the Washington Post should be deeply embarrassing. Yet it was not. Scott notes that it was not, that everyone forgot about it. Scott seemingly thinks not only that the Washington Post will pay zero price for doing this, but that this was entirely predictable.

This paragraph, for example, is all completely true aside from the questionable ‘was surrounded by socialists’ but is also completely obvious nonsense. It gives the impression that conclusions should be drawn without actually justifying those conclusions at all, which is classic.

the question I have is: What makes the rules observed here different from the rules elsewhere? My answer to that is nothing. The rules are the same.

This is exactly the level of misleading I expect any time there is a narrative and an interest in pushing that narrative.

It is often said that if you read an article in a newspaper about the field you know best it will make statements that are about as accurate as ‘wet ground causes rain,’ and you should then consider that maybe this isn’t unique to the field you know best. That certainly matches my experience, and that’s when there isn’t an obvious narrative agenda involved. When there is, it’s a lot worse.

The 2020 Election

To be safe, I’ll reiterate up front that I am very confident the 2020 election was not rigged. But I didn’t get that confidence because liberal media sources told me everything was fine, I got it because I have a detailed model of the world where there’s lots of strong evidence pointing in that direction

Notice that Scott’s argument rests here on the difference between the election article and the Marx article. The Marx article should not be believed. But I notice that I expect both articles to be following the same standards of evidence and honesty. Whoops.

My expectation is that the WaPo article will have a strong and obvious agenda, and that it will be entirely unconvincing to anyone who hadn’t already reached the conclusion that the 2020 election wasn’t rigged, and will primarily be aimed at giving people a reference with which to feel smug about the stupid people who were ‘fooled by the Big Lie’ and think the 2020 election was rigged.

I think this headline is smug and has the asshole nature and makes it clear that this is how words are supposed to work and that none of this is an accident.

If you’re going to put lines like ‘Trump, never bound to reality’ into your statement, it’s really hard to complain that people on the other side aren’t viewing you as a credible source.

We’re agreed that they lie all the time. And Scott is making the Bounded Distrust argument that this doesn’t much matter. That argument would need to apply equally to Donald Trump. And it seems like it does.

...the rule for politicians. Who, with notably rare exceptions, will lie, to your face, all the time, about actually everything.

It’s worth noting that the linked-to report from Wisconsin, also in WaPo, was better on many dimensions, not perfect but definitely coming from a world in which there is more focus on physical world modeling and less on narrative.

actually is functionally false, conflating different numbers at least three times.

When you’re willing to make this level of misstatement about the core question at issue, it makes it that much harder to know where you can still be credible.

The window of what we are forced to treat as real keeps narrowing.

Part of what’s going on is that this ‘conspiracy theorist’ label is a threat being used, and you have to do things to avoid being labeled that way. In particular, you need to notice what everybody knows is a conspiracy theory right now, and avoid advocating for it.

Things like this Washington Post article tell us nothing we don’t already know. All of the work is relying on this line and similar logic: The 2020 election got massive scrutiny from every major institution.

The liberal media could and did essentially use their One Time on Donald Trump in various ways, and paid the price in future credibility for doing so, but even with that it wouldn’t have been enough to sell us a centrally fraudulent election in a way that couldn’t be noticed.

you get extremely high confidence of no fraud. What you don’t get is especially bounded distrust in the media sources involved.

As I was writing this, Marginal Revolution linked to this excellent post about why the USA is unlikely to face civil war. Among other things, it notices that various measurements of America’s democracy were altered to make Trump look scary in ways that don’t make any sense

You can fairly say once again, blah blah blah, none of that is specific actual physical world falsifiable claims.

But this kind of thing is then cited as a ‘source’ to back up claims and sounds all scientific and tangible even though it’s not

And also, these are ‘experts’ giving their opinions, so now ‘experts’ who aren’t giving physical world falsifiable (in practice, not in theory) claims need to also be ignored by that standard.

Basically, I’m saying no, you can’t evaluate any of this by saying ‘look at all these experts’ and ‘look at all these institutions’ without also using your brain to think about the situation

Science

Scott tells this story.

they found that immigrants were responsible for a disproportionately high amount of some crimes in Sweden.

It counts as ‘scientific misconduct’ for you to not be able to justify how your research would ‘reduce exclusion and improve integration.’

Which is odd.

It means it is official policy that wrongfacts are being suppressed to avoid encouraging wrongthink and wrongpolicy.

It also means that we can no longer have a thing called ‘scientific misconduct’ that one can use to identify sources one cannot trust.

But, Scott says, scientists have the decency to accuse them of misconduct for failure to reduce exclusion

If the claims were false, the scientists cracking down on wrongfacts would say the facts in question were wrong. By accusing someone of saying wrongfacts but not saying the wrong facts are wrong, you’re essentially admitting the wrongfacts are right

Let me tell you a story, in three acts.
1. Masks (even filter masks) don’t work unless you’re a health professional.
2. All masks work.
3. Cloth masks don’t work.
At each stage of this story, scientists got on television to tout the current line. At each stage of this story, the ‘science denier’ style labels got used and contrary views were considered ‘dangerous misinformation.’

Yes, we did learn new information to some extent, but mostly we knew the whole story from the beginning and it’s still true now.

I could also tell you a story about vaccines. Something like this:
1. Vaccines are being rushed.
2. Vaccines are great and even prevent all transmission and you’re all set.
3. Vaccines are great but you still have to do all the other stuff, and also you need a booster even if you’re a kid, unless you’re in one of the places where that’s illegal. But only the one, definitely, that’s all.

Once again, yes, you could say that the information available changed. On boosters, I’m somewhat sympathetic to that, and of course Omicron variant happened, but don’t kid yourself. Motivations changed, so the story changed.

Then there’s social distancing and ‘lockdowns’ and protests where the scientists declared that social justice was a health issue and so the protests weren’t dangerous. Which are words that in other contexts have meaning.

There are the claims early on of ‘no community spread’ while testing was being actively suppressed via the CDC requiring everyone to use only its tests when it knew they didn’t work.

There’s Anthony Fauci saying we’d get to herd immunity at one number, then saying that when we’d made enough progress on vaccination he felt free to increase the number a bit more, indicating he didn’t care about what the real number was.

And in each case, the relevant ‘expert’ people who are wearing official ‘trust the science’ lapel pins explicitly lied, over and over again, using different stories, right to our f***ing faces.

So when we say that scientists ‘don’t lie directly’ we need to narrow that down a bit.

Because they did and they got caught.

If there is a published paper or even pre-print in one of many (but not all) jurisdictions, I mostly assume that it’s not ‘lying about specific actual physical world falsifiable-in-practice-if-false claims.’ Mostly. And that’s it. That’s all I will assume about the paper.

I will not assume it isn’t p-hacked to hell, that it has any hope of replication, that anything not explicitly mentioned was done correctly, that the abstract accurately describes the methodology or results, or that their discussion of what it means is in good faith

So yeah, anthropogenic global warming is real and all that, again we know this for plenty of other good reasons, but the reasoning we see here about why we can believe that? No. This is not the type of statement that we can assume scientists wouldn’t systematically lie about.

The petition tells you that scientists are being rewarded for stating the narrative that there is anthropogenic global warming. And they would presumably be severely punished for saying the opposite.

The petition does not tell you that these people sincerely believe anything, although in this case I am confident that they mostly or entirely do. It definitely does not tell you that these people’s sincere beliefs are right, or even well-justified, although in this case I believe that they are.

In my case, if I believed the local wrongthink, I would avoid lying by the strategy of being very very quiet on the whole topic

Others are surely thinking along similar lines, except not everyone has the integrity and/or freedom to simply say nothing in such spots.

Then Scott goes on to say this. (before you object that some different global-warming related claim is false, please consider whether the IPCC has said with certainty that it isn’t)

So it sounds like the standard is specifically that the IPCC does not make statements that false things are definitely true. Whereas if ‘some climatologists’ make such claims, that’s unsurprising. So when enough scientists of various types go around saying we are literally all going to die from this and manage to convince a large portion of an entire generation to think they are so doomed they will never get to grow old, we can’t even treat that as evidence of anything, let alone call them out on that, because the IPCC hasn’t specifically said so.

Yet I don’t see them or any other ‘experts’ standing up to boldly tell everyone that yes we have much work to do but maybe we can all calm down a bit

Whereas I see other ‘experts’ adding fuel to this fire, presumably because they think that only by getting people into that level of panic can they get people to actually do something

Some people wonder how so many people could not Trust the Science™ in such matters. I don’t wonder about that.

Nor do I think this is the reason Scott believes in AGW

Ivermectin One Last Time Oh Please God Let This Be The Last Time

I think some people are able to figure out these rules and feel comfortable with them, and other people can’t and end up as conspiracy theorists. A conspiracy theorist, officially now defined as anyone believing the Official Lying Guidelines are more flexible than you think they are (see: everyone driving slower than me is an idiot, anyone driving faster than me is a maniac). (cf Goldilocks Zone)

Even though the rules keep loosening over time, and sometimes some things labeled ‘conspiracy theory’ turn out true, and also many things labeled ‘conspiracy theory’ don’t actually even require a conspiracy, that’s just a way of dismissing the claims.

Yes, experts are sometimes biased, if you’re being charitable, or ‘engaged in an implicitly coordinated suppression of information in conflict with the current narrative’ if you’re being more realistic. Also, sometimes they’re simply wrong, they have limited information to work with and limited cognition and lousy incentives and lives and this whole science thing is hard

I mean, the experts still haven’t come around to the Vitamin D train, so ‘the experts aren’t impressed by the evidence’ isn’t exactly what I’d think of as a knock-down argument against non-risky Covid treatments.

Once again, I agree with Scott on the bottom line. As far as I can tell, Ivermectin doesn’t work. But once again, I don’t think Scott’s stated algorithm is a good one, although once again I happily don’t think Scott is using his stated algorithm in practice. I think he’s mostly using mine, with the main difference being that I think he hasn’t sufficiently adjusted for how much the goalposts have been moved.

More likely, Scott noticed that the people pushing for Ivermectin were part of the Incorrect Anti-Narrative Contrarian Cluster who also push a bunch of other anti-narrative things that are not true, rather than part of the Correct Contrarian Cluster (CCC).

One can (and whether one realizes it or not, one does to some extent) use it in the climate change example, noticing that full denial of climate change is very much part of the Incorrect Anti-Narrative Contrarian Cluster (ICC), while also noticing that moderate positions are conspicuously not in the ICC but rather in the CCC.

Of course, that’s a level of attention paying and reasoning that’s in many ways harder than doing the core work oneself, but it’s also work that gets done in the background if you’re doing a bunch of other work, so it’s in some sense a free action once you’ve paid the associated costs.

this is all very very exploitable, as in it’s being exploited constantly.

I also notice that Scott didn’t choose any examples where the narrative in question is centrally lying to us, so it’s hard to tell where he thinks the border is, until the final note about the harvest.

Glorious Harvests

Scott’s next argument is that our Official Narrative Pronouncements can be thought of as similar to Soviet pronouncements

the clueless people need to realize that the savvy people aren’t always gullible, just more optimistic about their ability to extract signal from same.

I mean the clueless people aren’t exactly wrong. The government is still lying to them in year six, in the sense that the harvest is unlikely to be what you or I would call ‘glorious,’ and they will doubtless find some other ways to screw the little guy that aren’t taxes or revenue enhancements.

That doesn’t mean the ‘savvy’ position is reliable. Being savvy relies on being unusually savvy, and keeping track of how far things have moved

Those rules are anti-inductive, in the sense that they depend on the clueless remaining clueless.

If the system is distributed rather than centrally determined, and it’s a bunch of people on social media running around labeling things as other things, then you see a gradual ramping up of everything

If I want to say something is glorious I have to be two steps ahead of whatever I view as the ‘standard’ description

Bounds, Rules, Norms, Costs and Habits

there mostly aren’t lines you simply do not cross. There are only lines that are expensive to be caught crossing when similar others are not also caught crossing them.

This is a variant of having correlated debts, or losing money in the same way those around you lose money. You mostly only get punished for getting singled out as unusually bad. Thus, the more you push the same lies as others and break the same rules, especially as part of The Narrative, the more effectively you are protected, and the lower the price of breaking the rules.

When deciding what to do, various players will look at the costs and benefits of following or breaking what they think of as ‘the rules’ in various ways, mostly intuitively, and decide what to do about that in context.

The combination of these factors does often mean that there is effectively a calibrated response to any given situation.

Thus the chosen details of what is claimed and said actually can tell you quite a lot about the underlying physical world situation, if you can remain sufficiently well-calibrated

Combining a variety of sources improves your results.

By observing the differences in their responses, you can learn a lot about what’s going on by asking what would make all their responses make sense at once.

The principle that This is Not a Coincidence Because Nothing is Ever a Coincidence will serve you well here on the margin.
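A toy sketch, not anything from Zvi’s post: if you wanted to make the ‘combine sources and ask what would make all their responses make sense at once’ move explicit, one minimal way is to treat each report as a likelihood ratio and multiply. All the source labels, reliability numbers, and the prior below are made up purely for illustration.

```python
# A minimal sketch of "combining a variety of sources" as a Bayesian update.
# All source names, likelihood ratios, and the prior below are hypothetical,
# chosen only to illustrate the mechanic, not to describe any real outlet.

def combine_reports(prior_prob, likelihood_ratios):
    """Multiply the prior odds by each report's likelihood ratio, return a probability."""
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Likelihood ratio = P(source reports this | claim true) / P(source reports this | claim false).
# A narrative-aligned outlet confirming its own side's story carries little signal (LR near 1);
# an admission against interest or a checkable primary document carries much more.
reports = {
    "outlet_A_confirms_convenient_story": 1.2,
    "outlet_B_admission_against_interest": 8.0,
    "primary_document_checks_out": 20.0,
}

p = combine_reports(prior_prob=0.5, likelihood_ratios=reports.values())
print(f"Posterior probability the claim is true: {p:.2f}")
```

The useful part is the relative weighting, not the particular numbers: a source that would say the same thing whether or not the claim is true moves you almost nothing, while an admission against interest or a verifiable primary document moves you a lot.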

What Is the Current Translation Matrix?

seems only fair to tell where I am at.

Here’s mine for politicians: They are on what I call simulacra level 4, and they are moving symbols around without a direct connection to the underlying reality.

Assume by default that they lie, all the time, about everything, including intentionally misstating basic verifiable facts, but that to model them as even thinking on those terms is mostly an error.

For traditional news sources like the Washington Post, CNN or FOX: Assume until proven otherwise that they are engaging primarily in simulacra level 3 behavior, pushing the relevant Narrative and playing to and showing their loyalty to their side of the dialectic to the extent possible, subject to the constraints they are under.

Those constraints are a very narrow form of technically correct, the best kind of correct.

The Marx/Lincoln story is an excellent example of exactly where this line is. Assume that like that story, everything will go exactly up to that line to the extent it is useful for them to do so, but not over it. Then, based on what content is included, you know they didn’t have any better options, and you can back-chain to understand the situation.

Assume that they are constantly saying things similar to ‘wet ground causes rain’ when they want to be against wet ground, and also framing everything with maximum prejudice. Everything given or available to them will be twisted to inflict maximum Narrative (and get maximum clicks otherwise)

Basically, yes, there is a teeny tiny sense in which they will not outright lie, in the sense that there is a Fact Checker of some kind who has to be satisfied before they can hit publish, but assume it is the smallest sense possible while still containing at least some constraint on their behavior.

Also assume that headlines have (almost) zero constraints on them, are written by someone who really, really doesn’t care about accuracy, and are free to not only be false but to directly contradict the story that follows, and that they often will do exactly that.

If there’s an editorial, there are no rules

If it’s in any way subjective, there are no rules

For ‘scientists’ and ‘experts’:

If you want to find a ‘scientist’ or ‘expert’ to say any given thing, you can.

If you have some claim that fits the Narrative, then unless it is a full strict-false-and-one-could-prove-it violation, you can get lots of experts/scientists to sign off on it.

Mostly any given expert will have slightly more constraints than that, and will follow something similar to the news code, and will also have some amount of internal pressure that causes the vigor of endorsement to be somewhat proportional to the accuracy of the statement

The more technical the talking gets, the more you can trust it (to the extent you can understand it)

Also understand that the systems and rules are set up at this point to allow for very strong suppression of dissent, and creation of the illusion of consensus, through the use of social pressures and isolated demands for rigor and other such tactics, without need to resort to sharp falsifiable statements.

Expert consensus that is falsifiable-in-practice-in-a-punishing-way can still largely be trusted.

Expert consensus that is not that, not so much.

You should definitely expect the experts in any given field to greatly exaggerate the importance of the field at every turn, and to warn of the dire consequences of its neglect and our failure to Do Something

There are other sources, specific sources, where the translation matrix is less extreme, and I of course do my best to draw as much as possible from such sources.

So What Do We Do Now?

We decide how much time and effort we want to spend maintaining our calibration and translation matrix

There are three basic approaches here.

One is to basically stop caring so much about the news. This is a good strategy for many, and in most times.

A second option is to keep very careful track of the physical world conditions, do lots of your own work and not need to rely on secondary sources like newspapers. I assure you that mostly this is a lot of work and you only want to do this in carefully selected sub-realms

The third option is division of labor and outsourcing.

You get to choose your portfolio of sources.

That can be as simple as your spouse or a good friend that you know you can trust.

It can also aggregate various community sources

I sometimes hear that someone has decided to outsource their Covid-19 perspectives to me and my posts in this way. (I am definitely one of these.)

Everyone I list on my links and blogroll qualifies as someone I am mostly willing to trust.

That doesn’t mean I fully trust their judgment, or that it’s all created equal, but there’s a sense in which I can relax when engaging with such sources. There’s also, of course, a sense in which I can’t relax even when dealing with most of those sources, to varying degrees.

But what about the global problem as a global problem?

There are no easy answers there.

My blog is in part an attempt at an answer. This seems very much like a Be The Change You Want to See in the World situation.

