Belief Trees: Storing Reasoning and Crowdsourcing Truth

Noah Finberg
Considdr
Nov 10, 2020

If you’re coming to this post without first reading the overview post: “Considdr Milestones”, I’d highly recommend starting there to gain appropriate context for what follows.

A theoretical introduction to Belief Trees and a walkthrough of Belief Tree building in the first MVP of Considdr in 2016 (first called “The Market of Ideas”).

Belief Trees

Belief Trees were envisioned as a way to build, store, visualize, and continually improve upon our beliefs as we process new information. So much of what we believe depends on what we can momentarily call to mind, but our memory is severely limited. We forget almost everything we learn as time passes.

Instead of maintaining well-considered, evidence-based beliefs, we tend to use unconscious emotional responses as a shortcut. The information we call to mind whenever we’re faced with reconstructing what we believe on the fly is biased by how we feel. We’re more likely to call to mind confirmatory information; to discount information that doesn’t comport with how we already feel; and to blindly accept information that does. We are all emotionally motivated reasoners. See “The Rationalizing Voter” for a compelling take on this human psychological tendency.

Arguments and Evidence

How would we ideally update our beliefs when faced with new information? First, for any question we’d call to mind every reasonable argument bearing on it. We’d also call to mind all the evidence that may support or contradict each argument. We’d weigh all this information carefully against whatever new argument or evidence is presented to us. Then we’d carefully update how we would answer the relevant question(s) in the direction of the new information (if it still seems credible upon review). Fans of Bayesian inference will recognize this formulation.
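The updating described above can be sketched in a few lines of Bayes’ rule. This is a minimal, illustrative example, not anything from Considdr itself: the numbers and function name are assumptions chosen only to show the mechanics.

```python
# A minimal sketch of the Bayesian updating described above.
# All numbers and names here are illustrative assumptions.

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return P(belief | evidence) via Bayes' rule, given a prior and two likelihoods."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

# Start 60% confident in an answer, then encounter evidence that is
# three times as likely if the answer is true than if it is false.
belief = 0.6
belief = bayes_update(belief, p_evidence_if_true=0.9, p_evidence_if_false=0.3)
print(round(belief, 3))  # 0.818
```

The direction of the shift matters more than the exact numbers: credible confirming evidence nudges the belief up, disconfirming evidence nudges it down, and nothing is ever flipped wholesale.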

Unfortunately, our minds simply can’t process information like this ideal posits. But what if we were able to keep track of all the important arguments and evidence relevant to any given question or topic without relying too much on our limited memory? What if we could evenhandedly integrate new arguments and evidence into our beliefs as they become available?

The earliest version of a Belief Tree. For more on how the initial MVP evolved, see Considdr: A Social Reasoning Platform

The Implicit Structure of Documents

The information we consume — television segments, news articles, podcasts, books, research papers — isn’t typically structured in a format that lends itself to comparing arguments and weighing evidence across sources. The key questions addressed or arguments and evidence put forward in any given source of information are often obscured as the author moves from their initial research to their final product.

As an example, think about the earliest stages of the research process. First, the author has to figure out what questions they want to address with their work. Then they must go out and compile all the relevant arguments bearing on that question — along with any evidence relevant to those arguments. Before writing anything, a good researcher often maps out all of this information in an outline. Here is literally the first image on Google Images for a research outline.

https://www.pinterest.com/pin/686728643160134266/

This is what I refer to as the “implicit structure of documents.” What questions are being addressed? What arguments are put forward? What evidence backs those arguments up? When an author converts this outline (explicit structure) into free-form content — like an article, news segment, or podcast — they obscure the key questions, arguments, and evidence that make up the substance of the work. The explicit structure becomes implicit.
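One way to picture the explicit structure described above is as a small nested data structure: a question, the arguments bearing on it, and the evidence behind each argument. The field names and example content below are my own illustrative assumptions, not Considdr’s actual schema.

```python
# An illustrative in-memory shape for the "explicit structure" of a
# document: question -> arguments -> evidence. Names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    text: str
    source: str  # e.g. a URL or citation for the original document

@dataclass
class Argument:
    claim: str
    supports: bool  # does this argument support a "yes" answer?
    evidence: list[Evidence] = field(default_factory=list)

@dataclass
class Question:
    text: str
    arguments: list[Argument] = field(default_factory=list)

# A tiny hypothetical outline, the kind a researcher might build
# before writing, captured in the structure above.
outline = Question(
    text="Does remote work improve productivity?",
    arguments=[
        Argument(
            claim="Fewer interruptions allow deeper focus",
            supports=True,
            evidence=[Evidence("Self-reported focus rose in surveyed teams", "example.com/study")],
        )
    ],
)
print(outline.arguments[0].evidence[0].source)  # example.com/study
```

Once a source is in this modular form, comparing and weighing material across sources becomes a matter of walking the same structure for each one.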

The end product is easier to passively consume, but it becomes much harder to integrate arguments and evidence into a well-balanced belief. For the most part, when we watch a documentary we aren’t carefully balancing arguments and evidence against our past beliefs; we are experiencing the information. There is a ton of value in the experience of reading, watching, or listening to finished products, but if we could also uncover a source’s implicit structure, then we would have a modular way to evenhandedly compare, contrast, and integrate new information into our Belief Trees.

Crowdsourcing Insight

On Considdr, pulling out key questions, arguments, and evidence from any source of information has been variously called “building question-argument-evidence (QAE) units”; “creating idea summaries”; “taking notes”; and “extracting insights.” In some sense, it’s really all about reusing much of the work already done in the research process itself.

If we can turn any source of information into easily consumable blocks that contain the questions a work addresses, the arguments it advances, and the evidence it puts forth, we can build Belief Trees not only from QAE units we create, but from any QAE units that others create as well. The earliest version of Considdr relied on users to crowdsource this work (the most time-intensive part of Belief Tree creation) so that it’s possible to build Belief Trees from arguments and evidence even if you haven’t had time to consume the entire information product.
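The pooling described above can be sketched as a simple grouping: QAE units contributed by many users, keyed by the question they address, so that anyone can assemble a Belief Tree without having read every source. This is purely illustrative; the tuple layout and example data are assumptions, not Considdr’s implementation.

```python
# A sketch of crowdsourced QAE pooling: units from many contributors,
# grouped by question. Data shapes and contents are illustrative.
from collections import defaultdict

# Each unit: (question, argument, evidence, contributor)
units = [
    ("Is coffee healthy?", "Moderate intake is linked to lower risk", "Cohort study A", "alice"),
    ("Is coffee healthy?", "High intake disrupts sleep", "Lab study B", "bob"),
    ("Does exercise help memory?", "Aerobic exercise aids recall", "Trial C", "carol"),
]

belief_trees: dict[str, list[dict]] = defaultdict(list)
for question, argument, evidence, contributor in units:
    belief_trees[question].append(
        {"argument": argument, "evidence": evidence, "by": contributor}
    )

# One question's tree now pools work from multiple contributors:
print(len(belief_trees["Is coffee healthy?"]))  # 2
```

The point of the sketch is the reuse: the expensive extraction work is done once per source, by whoever read it, and every later Belief Tree builds on it.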

In future versions of Considdr, we figured out a way to automate “insight extraction” with an approach we call “summarization by adjacent document” (detailed in a subsequent post).

The earliest version of “extracting insight” in the form of “Question-Argument-Evidence” units. In this video they are called “ideas” or “idea summaries.”

Belief Updating and History

One core benefit of building Belief Trees is that they are easy to update and improve with new information. When new arguments and evidence are created on Considdr, if a user hasn’t considered them before, we can intelligently suggest that they reopen their Belief Tree. Over time you can see how your reasoning and beliefs have evolved when faced with new information. This intelligent suggestion and search approach, which we call “Logical Aggregation”, is detailed further in a subsequent post.
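The reopen-your-tree suggestion above can be sketched as a set difference: compare the arguments a user has already weighed against everything now attached to a question, and surface what’s new. The function name and identifiers are assumptions for illustration; the actual Logical Aggregation approach is described in a later post.

```python
# A sketch of the "reopen your Belief Tree" suggestion: surface the
# arguments on a question that a user has not yet considered.
# Identifiers here are illustrative assumptions.

def unseen_arguments(considered: set[str], available: set[str]) -> set[str]:
    """Arguments attached to a question that the user has not yet weighed."""
    return available - considered

considered = {"arg-1", "arg-2"}          # what this user's tree already covers
available = {"arg-1", "arg-2", "arg-3"}  # everything now attached to the question

new = unseen_arguments(considered, available)
if new:
    print(f"Suggest reopening Belief Tree: {len(new)} new argument(s)")
```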
