(2020-02-21) Can We Agree On The Facts

Stian Haklev: Can we agree on the facts? In most discussions, whether in politics or in research, we tend to jump seamlessly between facts, logical inferences, theories, ideas and value judgments. This can be done manipulatively, but also seems to be “just human”. Are there better ways to reach agreements, and collaborate more effectively? (claim)

In 2014 I was a PhD student in Toronto, studying how computers can help groups of students learn.

At the same time, the city was engulfed in a debate over whether the ailing rapid transit system in Scarborough should be replaced with a streetcar system, which would reach a much larger share of the area's very low-density population, or with a subway, which would be faster but serve a much smaller population.

This might seem like a debate where you would want the engineers and city planners to carefully map out all of the different costs, projections, and consequences. In the end, the decision would still come down to a choice of values and assumptions.

However, there was a set of facts that no reasonable person could disagree about (at least not without careful research), which had been presented by city staff after detailed studies, and which should have served as a baseline for the discussion.

Instead, what happened was endless confusion, random numbers, contradictions, changes of topic, and general malarkey.

One possible avenue for mapping out the facts and arguments could have been argument maps, which we'll look at next (Argumentation Visualization).

Literature reviews, “a grand map of the field”

As a young researcher struggling to orient myself in the literature, and seeing papers and research projects that didn't seem to build on each other or make any real progress, I really wished we had more coordinated attempts at mapping fields.

One such attempt I came across was first mentioned in a grant proposal around Open Educational Resources from CMU and the Open University. As one of their outputs, they proposed a research portal.

This research proposal was a knot in a long thread of research at the Knowledge Media Institute at the OU, driven by people like Simon Buckingham Shum and Anna De Liddo, who worked on an application called Compendium that traced its roots back to Issue-Based Information Systems (IBIS) from the 1960s.

They created a set of Evidence Hubs using Compendium, where people could map out claims, linking to evidence and counter-evidence.
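
To make the shape of such a map concrete, here is a minimal Python sketch of an IBIS-style structure: issues are answered by positions (claims), and arguments support or oppose positions. The type names are my own, not Compendium's actual data model, and the Scarborough example entries are paraphrased from above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Argument:
    text: str
    supports: bool  # True = evidence for the position, False = counter-evidence

@dataclass
class Position:
    claim: str
    arguments: List[Argument] = field(default_factory=list)

@dataclass
class Issue:
    question: str
    positions: List[Position] = field(default_factory=list)

# The Scarborough transit debate as a tiny map:
issue = Issue("How should Scarborough's ailing rapid transit be replaced?")
lrt = Position("Build a streetcar/LRT network")
lrt.arguments.append(Argument("Reaches a much larger low-density area", supports=True))
subway = Position("Build a subway")
subway.arguments.append(Argument("Faster end-to-end travel", supports=True))
subway.arguments.append(Argument("Serves a much smaller population", supports=False))
issue.positions += [lrt, subway]

for pos in issue.positions:
    print(pos.claim)
    for arg in pos.arguments:
        print("  ", "+" if arg.supports else "-", arg.text)
```

The point of the structure is that evidence and counter-evidence attach to specific claims, rather than floating loose in a general debate.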

This software was also used for civic facilitation, where a trained facilitator would not only take notes, but actively use the representation of collective understanding to guide the group toward productive discussion. Here's one great example, where you can visit all the maps they produced.

Another clear example of the need for more coordination in mapping out a field came a few years later, with MOOCs.

Impressive feats of literature analysis and fact checking by individuals

Better tools for fact checking, and for reuse

One of the most impressive use cases for Roam is Elizabeth Van Nostrand's approach to extremely careful reading of history books, which she showcased during the San Francisco Roam meetup in January: she will take a history book, for example about the Roman Empire, and try to extract every single claim that it makes.

Ideally, once we've established this very peculiar fact (which is not likely to be featured in Wikipedia), all future publications would directly cite this discussion.
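
As a toy illustration (not Van Nostrand's workflow or Roam's actual format), an extracted claim might be stored as a structured record carrying its provenance and a confidence score. Every field name here is hypothetical, and the citation is invented for the example:

```python
# One extracted claim as structured data. Field names, the citation,
# and the URL are all hypothetical placeholders.
claim = {
    "statement": "Caesar crossed the Rubicon in 49 BC",
    "source": {"book": "A History of Rome", "page": 212},  # invented citation
    "discussion_url": "https://example.org/claims/123",    # placeholder
    "confidence": 0.9,  # extracted claims are rarely held at 100%
}
print(f'{claim["statement"]} (confidence: {claim["confidence"]:.0%})')
```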

But if we have all these statements that we cannot be 100% sure about, can we really build any logical tree of inference on top of them?

Bayesian reasoning to the rescue...
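
As a sketch of what this could look like (not from the essay; the claim, the numbers, and the independence assumption are all made up for illustration), two Bayesian ideas matter here: updating belief in a single claim as evidence arrives, and the way confidence erodes when a conclusion is chained from several uncertain premises.

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H | E) via Bayes' rule."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

# Claim: "the LRT option serves more riders per dollar" (invented example).
belief = 0.5  # start agnostic
# A detailed city-staff study supports the claim: such a study is much
# more likely to appear if the claim is true than if it is false.
belief = bayes_update(belief, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(f"after supporting study: {belief:.2f}")        # 0.80

# A weakly sourced op-ed against the claim barely moves us.
belief = bayes_update(belief, p_e_given_h=0.45, p_e_given_not_h=0.55)
print(f"after weak counter-evidence: {belief:.2f}")   # 0.77

# Chaining: a conclusion that needs four premises, each held at 90%
# (and assumed independent), is far shakier than any single premise.
premises = [0.9, 0.9, 0.9, 0.9]
conclusion = 1.0
for p in premises:
    conclusion *= p
print(f"confidence in the conclusion: {conclusion:.2f}")  # ~0.66
```

So we don't need 100% certainty in each statement to reason with them; we need to track how much certainty we have, and let it propagate honestly through the inference tree.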

