Logical Aggregation: Leveraging the Implicit Structure in Documents for a New Kind of Search Engine

Noah Finberg
Dec 1, 2020 · 12 min read

If you’re coming to this post without first reading the overview post: “Considdr Milestones”, I’d highly recommend starting there to gain appropriate context for what follows.

Intro to Logical Aggregation

Logical Aggregation is a new approach to organizing information by expressing that information in terms of the questions it may address, the arguments it puts forth, and the evidence it uses to back up those arguments. It figures out which arguments and evidence are most relevant to any given question based on the metadata produced by the Belief Tree building process.

Expressing all information this way — in terms of questions, arguments, and evidence — has a key advantage during search. With this structure, rather than return just a list of links to articles, the Considdr search engine can return the key arguments and evidence across many documents that address your search query without requiring you to read the potentially voluminous number of pages from which they came.

The below discussion on the structure of information is similar to that in Belief Trees: Storing Reasoning and Crowdsourcing Truth. For additional context, I’d recommend reading that article first as understanding Logical Aggregation depends on understanding Belief Trees.

The Implicit Structure of Documents — Questions, Arguments, and Evidence

Before explaining the general approach of the Logical Aggregation algorithm, it’s important to take a step back and think about how we typically consume information.

We might choose to listen to a podcast, watch a television segment, or read this article, for instance. All of these activities usually have at least one thing in common: passivity. Unless we’re rigorously taking notes, we’re simply letting the information wash over us. That means we’re forming our beliefs passively: we’ll internalize information that confirms our existing biases and discount that which does not. Who could really blame us? Building our beliefs more actively, more deliberately, is a ton of work.

Because passivity is the default setting for how we consume most, if not all, of the information we ever see, it makes sense that the purveyors of that information would optimize their end products for passive consumption. The causal arrow probably runs the opposite direction too: because information is often structured for passive consumption, we tend to consume it passively. This dynamic is interesting given that most good-faith research requires a very active, deliberative process to produce.

Imagine a scholar writing an article in order to answer an important question. One of the first things they likely do is survey all of the different arguments that bear on the question at hand and gather as much evidence as possible in order to evaluate the credibility and significance of those arguments. Then they formulate their own synthesis of the arguments and evidence that will become their thesis. In other words, they create an outline for their paper. This outline represents the implicit logical structure of what will eventually become their paper.

Going from outline to paper certainly adds color, context, and even depth to the work, but often it also obscures the work’s underlying logical structure. Don’t get me wrong. It’s much more enjoyable to read a good book, listen to an engaging podcast, or watch a dramatic television segment — and we do learn a lot from these forms of content. Unfortunately, there is also a cost to an entertaining final product.

Staring at an outline definitely doesn’t have the same appeal. But there is a beauty in its clarity and a power in its modularity. With information restructured in terms of these essential units, we can more easily compare and contrast many works. By breaking a paper down to its fundamental logical form, we can reuse some or all of its question-argument-evidence (henceforth QAE) units. The questions an author asks, the arguments they make, and the evidence they present are the building blocks of informed belief.
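
To make that modularity concrete, here’s a minimal sketch of what a QAE unit might look like as a data structure. This is purely illustrative: the class names and fields are my own for this post, not Considdr’s actual data model.

```python
# A minimal, hypothetical representation of QAE units.
# Class names and fields are illustrative, not Considdr's actual schema.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    text: str         # the supporting statistic, quote, or data point
    source_url: str   # the document the evidence was extracted from

@dataclass
class Argument:
    claim: str                                   # the assertion an author makes
    evidence: list[Evidence] = field(default_factory=list)

@dataclass
class Question:
    text: str                                    # e.g. "Why are Americans so politically polarized?"
    arguments: list[Argument] = field(default_factory=list)
```

Once a paper is decomposed this way, its pieces become reusable: the same Argument can sit under many Questions, across many works and many readers.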

Revealing a work’s implicit logical structure presents a challenge. It’d be great if every author included the key QAE units as an addendum to their final product. Short of the author’s own interpretation of their QAEs, though, summaries from others are the next best thing. The first version of Considdr (detailed further in “Considdr: A Social Reasoning Platform”) relied on students to crowdsource QAEs as they consumed information. Using NLP, the latest version of Considdr figured out an automated way to extract at least the A and E components of QAE. See “Summarization by Adjacent Document” for more on this approach.

Building a Search Engine with Logical Aggregation

Ok, so now on to the Logical Aggregation algorithm itself. This description will be mostly high-level and intuitive. For a more in-depth take on Logical Aggregation, see our first, issued patent or our second, pending one, which envisions improvements to the first.

Logical Aggregation is a standard collaborative filtering recommendation engine that leverages the implicit structure of documents and the Belief Tree building process to build relationships between the components of QAEs. This means that for any given query, the goal of the algorithm isn’t just to find a set of links to relevant pages, but instead to return a wide array of argument and evidence units based on their logical relevance to the query. In other words, Logical Aggregation answers the question: what are the most important arguments and evidence I should be considering in evaluating any given question?

Metrics of Relevance

How does the algorithm figure out the most logically relevant arguments and evidence to return? What does it mean for any argument to be logically relevant to a question?

First, it’s helpful to think about how Google might evaluate the success of the results they return. Imagine I ask a question and the search engine delivers ten links to pages that could be helpful to me as I try to answer it. In addition to their famous PageRank algorithm, the search engine might track what links I click on or even how much time I spend on the page once I do.

But what information are these actions actually signaling? They signal that, based on the title of the page and the snippet Google provided, I thought there might be information in the linked article that could help me answer my question. If I spend a lot of time on that page, maybe that means the information is very valuable; or it could mean I took a long time to extract the important information therein; or maybe I couldn’t find anything valuable at all. We’ve all had the experience of clicking in and out of links on the first page of search results to try to get what we’re looking for.

No doubt clicks, time-spent-on-page, and the thousands of other things Google knows about your behavior and search history are valuable in returning relevant results. Google’s methods unquestionably are, and probably always will be, the very best for the vast majority of search use cases. I’d be foolish to assert otherwise. Logical Aggregation is designed for a relatively small, limited subset of search use cases. I address this issue further at the end of this post.

The problem with the above search metrics is just that: they are only search metrics. They don’t follow me all the way through my information gathering, synthesis, and belief creation processes. How could they? Much of that happens in a separate note document in the best case, and exclusively in my mind in the worst (it’s probably good that Google can’t actually read your mind…yet).

Search metrics are indirect indicators. Unless they’re focused on the snippet result (which might increasingly be the case), they are mostly mapping relevance between an entire document and a question. They don’t know which specific pieces of information actually ended up being directly relevant to the question at hand.

Enter Belief Tree creation. Beyond making it easier to construct well-balanced, evidence-based beliefs, the Belief Tree creation process aimed to capture as much of the information gathering, synthesis, and belief creation process as possible. It’s work many of us already do, and which pieces of information we gather, how we synthesize them, and what that entails for our beliefs all provide many more direct indicators of search result success.

Ok, so what does Logical Aggregation track then? In short, it tracks the metadata from the Belief Tree creation process. For illustration, here’s an example of that process (a sketch of the metadata it produces follows the list). I care deeply about the state of American politics, and particularly the extent of polarization and disinformation in our discourse. I’d like to build a well-balanced, evidence-based belief on the subject. This is the process for doing that:

  1. I create some initial questions that I hope to answer in my Belief Tree, including: “Why does it seem Americans are so politically polarized?”
  2. I give Considdr a query. At first it could be as simple as “political polarization.”
  3. Considdr returns a list of arguments and evidence from various sources related to my query. Considdr has no notion of logical relevance yet unless others have already built Belief Trees in this subject area.
  4. I scan the list of argument and evidence units and “consider” (aha!) those I find relevant to answering one of the questions I posed in my Belief Tree, and I “ignore” those that aren’t. We often joked it could be like “dating” our own beliefs: careful vetting and lots of trial and error to find a good match…“swipe right to consider.”
  5. I group similar arguments together within each question of the Belief Tree. Some argument units are making essentially the same claim and evidence originally under one may be supportive of the other as well.
  6. Finally, I can write down the current iteration of my belief on each question at hand as I try to reconcile the different arguments and evidence I’ve considered.
  7. Repeat as I find new arguments and evidence to consider, or as the platform uses Logical Aggregation to find arguments and evidence logically related to my existing Belief Trees.
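
Each of those steps throws off metadata. Here’s a hedged sketch of the kind of events the process could emit; the event names and fields are illustrative assumptions, not Considdr’s actual schema.

```python
# Hypothetical Belief Tree metadata: every consider/ignore/group action links
# a question to an argument. Field names are assumptions for illustration.
from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class BeliefTreeEvent:
    user_id: str
    question_id: str
    argument_id: str
    action: Literal["consider", "ignore", "group"]
    group_id: Optional[str] = None   # set when arguments are grouped as one claim

# Each "consider" is a direct-relevance signal: the user explicitly judged
# this argument worth weighing when answering this question.
events = [
    BeliefTreeEvent("u1", "q_polarization", "a_media_bubbles", "consider"),
    BeliefTreeEvent("u1", "q_polarization", "a_gerrymandering", "consider"),
    BeliefTreeEvent("u1", "q_polarization", "a_weather", "ignore"),
]
```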

Think about all the information we now have on the various Q, A, and E units. We can track which arguments are directly relevant to a given question. Arguments considered under questions in Belief Trees would then be good candidates to return to others considering similar questions, whether for the first time (search) or within their existing Belief Trees (suggestion).

Direct relevance is a great signal. By integrating it into my Belief Tree, I’ve literally told Considdr that a specific piece of information was worth considering to answer a particular question. You might be asking yourself: didn’t he mention logical relevance like ten paragraphs ago? Well, here’s the coolest part of the algorithm (at least I think so).

If we were to just use direct relevance to suggest arguments for particular questions, we’d need to rely on many users to have already considered most of the relevant arguments under that same question in their Belief Tree. That’s where the concept of logical similarity comes in.

The Belief Tree process doesn’t just establish a connection between the arguments and evidence considered and the question asked (direct relevance). It establishes connections between questions and questions, arguments and arguments, evidence and evidence, arguments and similar questions, similar questions and arguments, and so on. Leveraging the logical structure of QAEs creates an entire logical network of sorts. Here’s a very crude drawing from the initial Considdr ideation days in 2014:

This very crude drawing displays how logical similarity works: questions are logically similar to each other if they share a direct relationship to the same set of arguments, or an indirect relationship to a similar set.

This kind of network produces at least the following logical similarity metrics. In our second (pending) patent, we detail how to use Kullback-Leibler divergence, also called relative entropy, to calculate these (a sketch follows the list):

  • Questions are logically similar to each other if they share a direct relationship to the same set of arguments, or an indirect relationship to a similar set of arguments.
  • Arguments are similar to each other if they’ve been grouped together in a Belief Tree or if they share a similar set of questions.
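
To make this concrete, here’s a rough sketch of how a question-to-question similarity score could be computed with KL divergence, treating each question as a distribution over the arguments considered under it. The patent’s actual formulation isn’t reproduced here; the smoothing, the symmetrization, and the mapping to a (0, 1] score are all my assumptions for illustration.

```python
# Question similarity via relative entropy (KL divergence). The details
# (smoothing, symmetrization) are illustrative assumptions, not the
# patented formulation.
import math

def kl_divergence(p: dict[str, float], q: dict[str, float], eps: float = 1e-9) -> float:
    """KL(P || Q) over the union of argument ids, with epsilon smoothing."""
    args = set(p) | set(q)
    p_total = sum(p.values()) + eps * len(args)
    q_total = sum(q.values()) + eps * len(args)
    divergence = 0.0
    for a in args:
        pa = (p.get(a, 0.0) + eps) / p_total
        qa = (q.get(a, 0.0) + eps) / q_total
        divergence += pa * math.log(pa / qa)
    return divergence

def question_similarity(p: dict[str, float], q: dict[str, float]) -> float:
    """Symmetrize and map to (0, 1]: identical argument distributions score 1."""
    return 1.0 / (1.0 + kl_divergence(p, q) + kl_divergence(q, p))

# "Consider" counts per argument for two questions about polarization:
q1 = {"a_media_bubbles": 5, "a_gerrymandering": 3}
q2 = {"a_media_bubbles": 4, "a_primary_system": 2}
print(question_similarity(q1, q2))   # higher when the argument sets overlap more
```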

Once we have logical similarity, we can get logical relevance. A question can return not only the set of arguments that have been directly considered underneath it, but also the set of arguments from similar questions, weighted in relevance by the extent of the question similarity. Arguments in the results can then be grouped by logical similarity. As a result, Logical Aggregation delivers an ideally diverse set of logically (and directly) relevant arguments and evidence for any given question.
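
Here’s a small sketch of that last step, under the same illustrative assumptions as above: direct considerations get full weight, and arguments from similar questions are discounted by the similarity score.

```python
# Illustrative logical relevance: direct considerations plus similarity-weighted
# arguments from neighboring questions. The weighting scheme is an assumption.
from collections import defaultdict

def logically_relevant_arguments(
    question_id: str,
    considered: dict[str, set[str]],       # question_id -> argument ids considered under it
    similar_questions: dict[str, float],   # other question_id -> similarity in (0, 1]
) -> list[tuple[str, float]]:
    scores: dict[str, float] = defaultdict(float)
    for arg in considered.get(question_id, set()):
        scores[arg] += 1.0                 # direct relevance: full weight
    for other_q, sim in similar_questions.items():
        for arg in considered.get(other_q, set()):
            scores[arg] += sim             # indirect relevance, discounted by similarity
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```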

An early search result example. This version of Considdr relied on students to crowdsource “notes,” aka “QAE units.” We ultimately figured out a way to automate this process so that we could generate many more AE units, and our summaries of the arguments and evidence in an article became much more reliable. Again, see my post on “Summarization by Adjacent Document” for more info on that.

If it’s such a great approach, why doesn’t Google just do this?

A natural question that always comes out of this discussion is: why doesn’t Google just use this approach if it’s so good? One answer is that maybe the approach isn’t actually that good…we couldn’t turn our technology into a viable business, after all. I detail Considdr and Logical Aggregation’s significant limitations in the final section below.

But first, here are a few reasons why this approach may not make sense from Google’s perspective:

  1. It’s only applicable to a relatively small subset of queries.

Most internet searches today are not aimed at forming the most well-rounded, evidence-based beliefs possible for questions with potentially conflicting, nuanced, or unclear answers. Most people use search to find quick, clear, factual answers: a product, a movie, the age of the president, how far away their school is, a restaurant, a particular website, etc.

  2. Those queries aren’t nearly as easy to monetize.

Google generates revenue primarily from ads. They can generate the highest ROI for an ad when users are looking for products or services not arguments or evidence.

  3. Logical Aggregation requires restructuring all information into QAE.

Google indexes entire documents and is built around its very successful ability to do so. Moving to a QAE structure is not intuitive for most people and would require a massive shift in approach.

If they wanted to, Google has the resources, engineering talent, and data to design and implement a version of Logical Aggregation much better and much more effectively than I or any tiny startup ever could.

Dear Google,

If someone at Google is reading this and thinks my assumptions in 1, 2, and 3 above may be incorrect, please feel free to try this or a similar approach. You’d have the best shot at realizing the mission of making it easier for anyone to build balanced, evidence-based beliefs. Deep down, I was never fundamentally motivated by the potential financial payoff of Considdr (probably a large part of why I failed to create a viable business). I just wanted positive social impact. I know that I ultimately failed to have that impact, and that’s why closing down after six years of work has been personally difficult.

So Google, or anyone really, feel free to take whatever you’d like. I’d be happy to discuss (even though you likely wouldn’t need me). I’ll share any information you’d like to know if it means all the work we did can have some future impact.

I’ve had time to do a lot of reflecting. I’ll probably write a post at some point sharing my personal experience with the journey and the company’s failure. Here, however, I want to briefly focus on where our search engine, and Logical Aggregation specifically, fell short.

  1. First, and most obviously, converting the world’s information into QAE is a massive endeavor. Many warned me at the beginning of this project that this would be one of the biggest hurdles. They were correct. We had very limited resources. We were able to index millions of AE units (the Q part would get added through Belief Tree construction), but even that volume pales in comparison to the trillions of pages modern search engines crawl.
  2. As a result of (1), when our customers would search Considdr for something, they naturally expected results equivalent to or better than what they would get from the same search on Google. Sometimes we were able to provide a meaningful improvement by returning many quality AEs for specific searches, but our success was inconsistent and limited.
  3. QAE can be a restrictive format to force results into. Many of our customers, for example, wanted to know who their largest competitors in the market were. Considdr could return lots of evidence on market size, growth, and trends, but not a simple list of competitors. Google obviously could.
  4. Finally, there are the standard problems associated with any collaborative filtering recommendation engine: cold start and sparsity in particular. Cold start: because building logical connections between all of the QAE component parts relied on users building Belief Trees, there weren’t any logical connections to begin with. Our search relied on more traditional metrics (e.g. keyword similarity) while we waited for logical connections to accumulate (a sketch of this kind of fallback follows the list). We never got the volume of usage that would get us anywhere close to leveraging Logical Aggregation in a truly meaningful way. Sparsity: we were never able to focus effectively on one specific subject area (a huge mistake) because we constantly struggled to define our target market. As a result, our data was spread over thousands of topics and hundreds of thousands of subtopics, which meant the connections produced by Belief Tree building were too dispersed to build reliable or comprehensive similar-question and similar-argument scores.
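
For what it’s worth, the cold-start fallback can be pictured as a simple blend: lean on keyword similarity when logical connections are sparse, and shift weight toward logical relevance as Belief Tree connections accumulate. The blending function below is my after-the-fact illustration, not our production ranking code.

```python
# Hypothetical cold-start blend: weight logical relevance by how many Belief
# Tree connections exist for a question. The saturation constant is arbitrary.
def blended_score(keyword_score: float, logical_score: float,
                  n_connections: int, saturation: int = 100) -> float:
    w = min(n_connections / saturation, 1.0)   # 0.0 with no connections, 1.0 at saturation
    return (1.0 - w) * keyword_score + w * logical_score

print(blended_score(0.8, 0.0, n_connections=0))    # 0.8 -- pure keyword fallback
print(blended_score(0.8, 0.9, n_connections=100))  # 0.9 -- fully logical
```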

Given our resources and relatively new exposure to the search/NLP/ML space, we were able to build a tremendous product that functioned in a production environment. I’m proud of my team’s work despite our shortcomings. We learned so, so much and moved in the direction of our ultimate goal, but we never realized the potential of Logical Aggregation.

Considdr

Search Less. Consider More.


Considdr (2014–2020) was a search engine that reimagined how we build, store, and update our beliefs so that they are more evidence-based. We fundamentally rethought how best to aggregate and distill the world’s knowledge. This blog is where we share our breakthroughs.
