An example of meta-execution

Imagine that Alice, Bob, Charlie, Denise… are a group of individuals, trying to work together to answer the query Q without having any of them consider something as complex as Q.

To start, Alice is given an abstract version of Q: “the question represented by the sequence of words [x],” where x is a variable that can be unpacked if needed.

(If Alice were to unpack x, she would see something like “the list with first element [y] and following elements [z]”. But she doesn’t.)
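
To make the pointer-passing concrete, here is a minimal sketch in Python of the kind of opaque handle Alice might work with. The `Pointer` class, its `unpack` method, and the example sentence are illustrative assumptions rather than anything specified in the text; the point is only that Alice can hold a reference to the whole word sequence while revealing one layer of it at a time.

```python
class Pointer:
    """An opaque handle to a value; a holder sees the handle, never the value itself."""
    _store = {}
    _next_id = 0

    def __init__(self, value):
        self.id = Pointer._next_id
        Pointer._next_id += 1
        Pointer._store[self.id] = value

    def unpack(self):
        """Reveal one layer: for a non-empty list, pointers to the first element and to the rest."""
        value = Pointer._store[self.id]
        if isinstance(value, list) and value:
            return Pointer(value[0]), Pointer(value[1:])
        return value, None


# Alice's entire view of Q: "the question represented by the sequence of words [x]".
# The concrete sentence is a hypothetical stand-in; the text never specifies Q.
x = Pointer("did the rooster crowing cause the sun to rise".split())

# If Alice chose to unpack x, she would see only two more opaque pointers [y] and [z].
y, z = x.unpack()
print(isinstance(y, Pointer), isinstance(z, Pointer))   # True True -- still abstract
```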

Instead, Alice turns to Bob and asks: “Can you parse the question represented by the sequence of words [x]?”

Bob turns to Charlie and asks: “Can you give me the most likely parse trees of the sequence of words [x]?”

Charlie and his helpers are able to assemble the most likely parse trees without any of them looking at more than half the sentence. Once they’re done, Charlie hands the parse trees back to Bob.

Now Bob turns to Denise and asks: “For each parse tree in the list [w], what question does it represent?”

Because of the recursive structure of parse trees, Denise and her helpers are able to build up the meaning recursively, without any of them needing to look at the full sentence. She hands an answer back to Bob for each parse tree: “It represents the question [do [a] and [b] satisfy relationship [c]?]”
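
One way to picture Denise’s subtask is as a recursive function over parse-tree nodes, where each recursive call would in practice be handed to a different helper. The node format, the `ask` stand-in, and the “satisfy-relationship” meaning below are invented for illustration; only the recursive shape reflects the text.

```python
def meaning_of(node, ask):
    """Compute the meaning of a single parse-tree node, delegating its children via `ask`."""
    if node["type"] == "word":
        return node["text"]                         # a leaf is seen in isolation
    # Only the child nodes are touched here, each by a (conceptually) separate helper,
    # so no single individual reads the whole sentence.
    child_meanings = [ask(child) for child in node["children"]]
    if node["type"] == "question":
        a, relation, b = child_meanings             # "do [a] and [b] satisfy relationship [c]?"
        return {"question": "satisfy-relationship", "a": a, "b": b, "relation": relation}
    return {"phrase": child_meanings}

def ask(node):
    return meaning_of(node, ask)                    # stand-in for handing the node to a helper

tree = {"type": "question", "children": [
    {"type": "word", "text": "the crowing"},
    {"type": "word", "text": "caused"},
    {"type": "word", "text": "the sunrise"},
]}
print(ask(tree))
# {'question': 'satisfy-relationship', 'a': 'the crowing', 'b': 'the sunrise', 'relation': 'caused'}
```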

Bob hands these candidate meanings to Erica and asks her to evaluate them for plausibility. Erica and her helpers do so, without any of them needing to look at a concrete representation of the full sentence. Bob ends up with the most likely meaning, and hands it back to Alice.

Now Alice has a representation of the meaning of Q: “That sequence of words represents the question [d].” She hands the question [d] to Frank and asks him to answer it.

Now Frank is faced with a question like “Do [a] and [b] satisfy relationship [c]?” He asks George: “How do you tell if two items satisfy relationship [c]?”

Now George is faced with a question like “How do you tell if one thing causes another?” that doesn’t depend on all of the complexity of Q. So he can answer it, and give Frank back a flowchart [e].

With a sequence of steps in hand, Frank can ask Henry to perform the first step, Isabella to perform the second, Juan to perform the third, and so on. Some of these steps may be simpler than the original query, while others may need to be broken down further before we arrive at simpler steps.
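
Here is a loose sketch of that delegation, under the assumption that George’s flowchart [e] arrives as a list of steps, some of which carry sub-steps of their own. The step names and the `delegate` callback are hypothetical; the recursion mirrors the “broken down further” clause above.

```python
def carry_out(steps, delegate):
    """Carry out each step via a different individual; recurse on steps with sub-steps."""
    results = []
    for step in steps:
        if step.get("substeps"):
            # Not yet simple enough: break it down and delegate the pieces in turn.
            results.append(carry_out(step["substeps"], delegate))
        else:
            results.append(delegate(step))          # Henry gets one step, Isabella the next, ...
    return results

flowchart = [
    {"name": "order of events"},
    {"name": "counterfactual", "substeps": [{"name": "world without [a]"}]},
    {"name": "combine evidence"},
]
print(carry_out(flowchart, delegate=lambda step: f"answer to {step['name']}"))
# ['answer to order of events', ['answer to world without [a]'], 'answer to combine evidence']
```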

Of course it’s hard for George to produce a fully general procedure for determining whether one thing causes another. So Henry, Isabella, and Juan will likely need to ask George some clarifying questions. But as long as those questions don’t tell George everything about the objects [a] and [b], George’s query can still end up less complex than Q itself.

To illustrate how some of these steps might work:

  • Henry might be tasked with determining whether [a] occurred before or after [b], which he can break down into the simpler questions of “when did [a] occur?” and “when did [b] occur?” (a sketch of this decomposition follows this list)
  • Isabella might be tasked with thinking about how the world would be different if [a] had not occurred, and answering questions about this counterfactual. Of course each question about the counterfactual can be posed to a different individual. So as long as each question is not sufficient to determine [b], Isabella’s task is potentially simpler than Q.
  • Juan might be tasked with combining many different sources of evidence into a judgment about whether [a] caused [b]. As long as the answers he combines aren’t actually sufficient to determine [a] and [b], Juan’s task is potentially simpler than Q.
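
Here is the toy version of Henry’s decomposition promised in the first bullet. The `ask` callback is a stand-in for posing a sub-question to another individual; apart from the two sub-questions themselves, everything is an illustrative assumption.

```python
def occurred_before(a, b, ask):
    """Decide whether [a] occurred before [b] from two independently answered sub-questions."""
    time_a = ask(f"When did {a} occur?")    # answered by one individual
    time_b = ask(f"When did {b} occur?")    # answered by another
    return time_a < time_b
```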

Once Frank knows the results of carrying out this sequence of steps, he can pass the answer back to Alice, who can then convert it back into a sequence of words and reply. Of course neither Frank nor Alice nor anyone else will see the concrete answer; they will only see abstract representations of it.