Let Nature Choose Your Deadlines

How Focusing on Structure Beats Focusing on Time

Sean McClure
NonTrivial
19 min read · May 11, 2023

--

For the podcast episode on this topic, check out the NonTrivial Podcast.

Life and Deadlines

Life presents us with a host of challenges for which we must find solutions. Finding solutions is less about ideas and more about discovering ways to build physical things that produce useful outputs. We might be constructing a software product to balance budgets, crafting a social program for helping the homeless or creating a painting to evoke a certain feeling. Whatever it is, we build things that produce outputs to solve problems.

Building things means getting things done on time. We cannot merely play in the sandbox until something useful pops out. There are expectations placed on us both by ourselves and by those we work for. Time constraints are a core part of our projects because they demarcate our efforts against our other responsibilities. Deadlines give our coworkers notice as to when our contribution will be delivered, and those who consume what we create can gauge when the next release will be.

Deadlines allow our life to be visualized on a timeline, giving us a sense of control and progress. Deadlines make it easier to have goals and to orient our work around bite-sized pieces. By envisioning the difference between now and a deadline we can manage and track our efforts, comparing those results to our performance in the past. And of course such practice works well for the division of labor we use to accomplish projects at scale.

Not So Natural

But deadlines are a product of modern life, not some phenomenon that occurs naturally. Our ancestors likely had a very different perception of time, unencumbered by the preoccupation of how long something takes to complete. Prior to the invention of the mechanical clock, our tasks were not measured against the time of day. Our ancestors undoubtedly took on tasks by embracing their energy as it came.

This kind of natural cadence is felt by all of us, especially when we run up against deadlines only to find ourselves scrambling. While our modern world has pathologized procrastination, leaving work until the last moment is often instinctive. In any case, projects that go over time and over budget are hardly the exception.

Divvying up one’s task according to time goes against our natural rhythm. There is an evolutionary reason we “postpone” and “stall.” We did not evolve to follow contrived structures based on idealized narratives about how projects should be realized. Life will always be far more complex than our control-seeking narratives suggest. There are far too many unknowns to plan one’s course, let alone the effort required to get there.

While people tend to get enamoured with the idea of discipline, the reality is our emotions are not something to be fought. Emotions operate in the high-dimensional space that our structured lives cannot. The inability of emotions to keep pace with some structured timeline is not the fault of biology; it is the naïveté of the modern narrative. We cannot be expected to be “in the mood” on schedule. There is a tempo to how we create things; one that cannot be outsmarted by the modern obsession with agenda.

Nature Chooses our Deadlines

Deadlines are both needed and unnatural, meaning we have to accept that things are required by a certain date, but also that there is something wholly unnatural about the deadlines we create. It would seem these opposing realities need to be reconciled.

We could keep redefining what “done” looks like until it appears doable, as is common in industry. Rather than creating our vision in full we can control the “scope” and limit what gets included in our list of needs. We can define so-called minimum viable products (MVPs) that allow just enough features to attract early consumers and/or validate our ideas.

But there is no way to know what it’s going to take to build something worthwhile. The pieces that are required, and how they need to work together, will be ready when they’re ready. Removing pieces for the sake of “scope” or “viability” likely limits the ability of our solution to converge, because the pieces we remove may be critical to solution viability. There is simply no way to know.

People want to think of their creations as though they follow deterministic rules, like cogs and pistons producing well-defined outputs. But this is counter to how solutions that solve hard problems work. In reality we cannot access, nor manage, the kind of information that allows for such intricate coordination. The causal opacity inherent to complex situations precludes the possibility of knowing how to construct something prior to it being revealed through trial. Good solutions precipitate out from our myriad efforts to solve problems; they are not inevitable sums of their parts.

We can only react to signals, and move accordingly, knowing that eventually the right solution will emerge. In short, nature chooses our deadlines, which suggests there isn’t much we can do to ensure that our work can still be high quality on an artificial deadline. But if we look into the mechanism behind what makes a good solution, there just might be something we can control after all, allowing us to meet our modern deadlines with naturally good solutions.

Solutions as Peaks in Complexity

We can think of solutions that work well as resolutions to optimization problems. In optimization theory we typically think of problem-solving, at least conceptually, as a ball rolling on a surface of hills and valleys. The height of the surface represents the amount of “error” (or “energy”) the current solution has, meaning the lower the ball sits on the surface, the better the solution. Think of a bad solution as having more error, thus sitting atop a hill, and a better solution as having less error, thus sitting lower in a valley. Obviously the “error” has to be defined according to the kind of problem we are working on. The goal of any optimization problem is to find the lowest possible point on the error surface while satisfying some reasonable number of constraints.

Figure 1 Building solutions to problems can be thought of as an optimization problem.

In the valleys of our error surface lie the configurations that work; physical arrangements of pieces and their interactions that solve the problem of interest, as depicted in Figure 1.

But optimization doesn’t tell us anything about what those physical configurations are supposed to look like. Even if we can somehow measure the current amount of error in our solution we wouldn’t know why it has that error. Optimization can try to use information to guess the next best move, but by and large it arrives at a solution thanks to time and happenstance (i.e. randomly landing in a deeper valley).
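
To make the “time and happenstance” point concrete, here is a minimal Python sketch. The error surface and the move rule are invented purely for illustration; a real project’s “error” would be whatever measure the problem defines. It explores a toy one-dimensional surface with two valleys by proposing random moves and keeping whichever lowers the error, with no understanding of why the error dropped.

    import random

    # Hypothetical 1-D "error surface": a shallow valley near x = 1 and a
    # deeper valley near x = -2. Real surfaces are vastly higher-dimensional.
    def error(x):
        return min((x - 1.0) ** 2 + 0.5, (x + 2.0) ** 2)

    random.seed(0)
    best_x = 5.0
    best_err = error(best_x)

    # Optimization by happenstance: propose random moves and keep any that
    # reduce the error, with no insight into why the error went down.
    for _ in range(10_000):
        candidate = best_x + random.gauss(0.0, 1.0)
        if error(candidate) < best_err:
            best_x, best_err = candidate, error(candidate)

    print(f"Settled near x = {best_x:.2f} with error {best_err:.3f}")

Run long enough, the ball tends to stumble out of the shallow valley and into the deeper one, not because the search understood the surface, but because enough random moves eventually land somewhere better.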

This is all the more true when we build things that are truly complex, which these days is becoming par for the course. There is an increasing level of “blackboxness” to our creations because they contain so many moving parts. Their error “surfaces”, which in reality are extremely high-dimensional spaces, contain the solution inside a fantastically vast, impossible-to-visualize region of possibility.

Embracing randomness and naïveté is critical to creating innovative things because we must explore a great deal of uncertain terrain in order to exploit things that work. Without any kind of insight into what pieces will be needed and how those pieces should be connected our efforts at building complex things are left largely to the whim of chance. We move, observe and react according to what happens as we stumble upon profitable information revealed by nature.

But this all takes time, and when it comes to the manufactured deadlines we live by we cannot just wait for a solution to converge. We must take what we have at the cutoff date.

But it turns out we can say something about the physical structure of good solutions, even ones that are novel. Any solution to a hard problem must have a structure that is neither too simple nor too random. We know overly simple solutions don’t resolve genuinely hard problems. One only has to look at nature in all its complexity to comprehend that hard problems demand many pieces and intricate interactions. Complexity has a number of signature properties that enable complex solutions to solve hard problems. These are things like nonlinearity, collective dynamics and hierarchy. We also know that complete randomness contains no useful information or practical machinery since randomness isn’t patterned. This means that complexity, as a concept, holds a critical clue regarding the physical nature of solutions that work.

Specifically, we should expect any good solution that we build to exist inside a sweet spot between simple structure and randomness, where complexity is high, at a point where the properties of complex things become apparent. This is the physical aspect of nature’s deadlines. Nature’s deadlines, defined here as the points in time when enough of the right pieces and their interactions come together to solve a hard problem, must exist when our physical solution has a requisite level of complexity to solve the problem of interest.

We can visualize the sweet spot of complexity using a “simpler” system that is still complex, and evolves dynamically. The following video shows ink diffusing in water. We can see that the ink starts out as a tight bundle of swirling matter, which soon starts to spread out, inviting in an increasing number of twists and twirls as it interacts with its environment. During the early stages the ink takes on increasing levels of complexity, which we can perceive due to the presence of intricate folds, deep shadows and other nontrivial structure. Eventually the ink diffuses until a featureless mass pervades the water uniformly.

Figure 2 Video of ink diffusing in water, as a showcase of the evolution in structural complexity over time.

At some point in time that evolving swirl of complexity is a configuration of matter that is arguably the most complex, relative to any other configuration this system reaches. What problem ink in water is or could be solving is debatable, but it must be most capable at its highest point in complexity.

This is because hard problems are high-dimensional, meaning they have many aspects that must be accounted for. Think about trying to create a piece of software that recognizes speech or faces. Programming explicit rules has never worked for these kinds of problems because there are too many aspects to account for (we can never know all the rules). What is needed as output from our solution must emerge from a fantastically intricate array of pieces and connections. The solution to any hard problem requires its own immense level of complexity to match the kind of complexity seen in the problem.

Of course there is little reason to believe we can predict when this sweet spot of maximum complexity will happen, at least with any decent level of accuracy. Complex systems evolve post-chaos, and chaos already limits our prediction horizon despite its inherent determinism.

But what would be useful is knowing if the sweet spot is something smeared out over a long period of time, or something that lives for a fleeting moment. If the former, it would mean good solutions exist across a wide spectrum of possible configurations. And this means even if we undershoot or overshoot nature’s deadline we might still produce a worthwhile solution. But if the latter, where the sweet spot lives for a fleeting moment, it means any solution we cut off at a manufactured deadline is likely to be far from optimal (unlikely to line up with nature’s deadline).

To get some insight in this direction we can attempt to quantify complexity, such that we can calculate it for systems that evolve in time. That way, we could try to capture the moment when complexity reaches a maximum in dynamic systems and see whether that maximum appears gradually or suddenly.

There are many ways we might attempt to quantify complexity. There is computational complexity (both space and time), logical depth, thermodynamic depth, effective complexity, fractal dimension, excess entropy, correlation, stored information etc.

Perhaps the most common is a concept known as algorithmic complexity (also called Kolmogorov complexity). Algorithmic complexity defines the amount of complexity in an object in terms of the smallest program that could describe it. Using the example from Wikipedia, compare these 2 strings:

  • abababababababababababababababab
  • 4c1j5b2p0cv4w1x8rx2y39umgw5q85s7

Both are the same length: 32 characters. But the first string is less complex than the second string because the first string can be summarized as “repeat AB 16 times” (only 18 characters), whereas the second string cannot be described with as short a description (its program must be larger). Another way to think about this notion of complexity is in terms of how much compression is possible. Note that the second string cannot be compressed as much as the first string, if at all.

We can also see that the first string has what we would call a “pattern” to it, whereas the second string appears patternless (i.e. more random). This correspondence between complexity and pattern is important.
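
A quick way to see this in practice is to run both strings through a general-purpose compressor and compare sizes; compressed length is a common, if rough, stand-in for algorithmic complexity. A minimal sketch in Python, using the standard-library zlib module:

    import zlib

    strings = [
        b"abababababababababababababababab",   # patterned
        b"4c1j5b2p0cv4w1x8rx2y39umgw5q85s7",   # patternless-looking
    ]

    for s in strings:
        compressed = zlib.compress(s, level=9)
        # A shorter compressed form suggests a shorter "program" can
        # reproduce the string, i.e. lower algorithmic complexity.
        # Note: zlib adds a small fixed overhead, so only the relative
        # sizes matter; an incompressible string can even grow slightly.
        print(f"{s.decode()}  raw={len(s)}  compressed={len(compressed)}")

The patterned string compresses to well under its raw length, while the patternless one does not compress at all (it comes out slightly larger once the compressor’s overhead is included).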

The problem with using algorithmic complexity, or many of the other definitions of complexity for that matter, is that it assumes complexity increases with randomness. While this is true initially, we know that complexity has a sweet spot, as per the ink video above. It makes no sense to call a completely randomized dispersion of ink more complex than the earlier stage where there are intricate folds and patterns. So algorithmic complexity is only useful as a starting point, but fails to capture the true evolution of complexity in dynamic systems.

To quantify complexity for our purpose, which is to gain insight into the structural aspects of good solutions, we need a definition of complexity that aligns with how we perceive complex patterns; an approach that aligns with what we know, intuitively, to be true. We need a complexity measure that takes into account complex structure, like the presence of hierarchies and the nesting of patterns; things we frequently see in nature.

A method suggested by Bagrov et al. in their paper Multiscale structural complexity of natural patterns¹ provides a possible approach. Their method assumes complexity can be understood as a kind of “self-dissimilarity.” The idea is that more complex things should exhibit additional information as we change the scale at which we view them (see references 2–4 for the origins of this approach).

Think about viewing the blood cells of a tiger, then stepping back and viewing the entire tiger. These 2 scales don’t look anything like each other. Viewing the whole tiger obviously brings additional information not present at the level of blood cells. Compare this to ink in water. Up-close ink would still look a lot like far-away ink; it would have swirls and folds. More complex things should see greater differences between their scales.

Note: The presence of self-dissimilarity doesn’t negate the role of fractals in complexity, which are self-similar. There is both self-similarity and self-dissimilarity in complex things, and the one we see depends on context. We are focusing on self-dissimilarity in this article.

To calculate a so-called structural complexity Bagrov and team looked at what happens to observables when the scale of the system is changed via renormalization group transformations. Doing so produces a so-called RG flow profile⁵, which allows for a quantitative definition of structural complexity.

We can think of renormalization in terms of coarse-graining, where we take coarser and coarser versions of our system. Coarse-graining is a common approach to simulating the behavior of complex systems when we cannot account for all the details (too computationally demanding). By coarse-graining we use a simplified representation of the system to more readily calculate its properties. Coarse-grained models are widely used in applications like molecular modeling:

Figure 3 Using coarse graining to create a simplified, more tractable model of a complex system.

We can do the same thing with images. Take a picture of some object or system, apply a renormalization scheme (coarse-grain the image), and use the difference between the fine-grained and coarse-grained versions of the image to measure a kind of self-dissimilarity, and thus overall structural complexity.
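
To make the procedure concrete, here is a minimal Python/NumPy sketch in the spirit of Bagrov et al.’s measure; the function names, the block-averaging filter, and the use of grayscale arrays are my own simplifying assumptions, not the paper’s exact implementation. The idea: coarse-grain the image step by step, project each coarser version back onto the finer grid, and accumulate the overlap-based dissimilarity between consecutive scales.

    import numpy as np

    def coarse_grain(img, factor=2):
        """Block-average (coarse-grain) a 2-D array by `factor` along each axis."""
        h, w = img.shape
        h, w = h - h % factor, w - w % factor  # trim to a multiple of factor
        blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
        return blocks.mean(axis=(1, 3))

    def upsample(img, factor=2):
        """Project a coarse pattern back onto the finer grid (repeat each pixel)."""
        return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

    def structural_complexity(img, levels=6):
        """Sum the dissimilarities between consecutive scales of a grayscale image."""
        patterns = [np.asarray(img, dtype=float)]
        for _ in range(levels):
            coarser = coarse_grain(patterns[-1])
            if min(coarser.shape) < 2:
                break
            patterns.append(coarser)

        total = 0.0
        for fine, coarse in zip(patterns[:-1], patterns[1:]):
            coarse_up = upsample(coarse)
            h, w = coarse_up.shape
            fine = fine[:h, :w]
            # Overlaps (mean pixel-wise products) within and across the two scales.
            o_fc = np.mean(fine * coarse_up)
            o_ff = np.mean(fine * fine)
            o_cc = np.mean(coarse_up * coarse_up)
            # Dissimilarity contributed by this pair of scales.
            total += abs(o_fc - 0.5 * (o_ff + o_cc))
        return total

    # Hypothetical usage: score each frame of an ink-in-water video and take
    # the frame of maximum complexity as an estimate of "nature's deadline".
    # frames = [...]  # list of 2-D grayscale arrays, one per video frame
    # complexities = [structural_complexity(f) for f in frames]
    # peak_frame = int(np.argmax(complexities))

A near-uniform image scores close to zero because every scale looks like every other, while an image with structure at many scales (folds within folds) accumulates dissimilarity at each level.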

Bagrov and his team did this with snapshot images taken from a video of ink in water. It allowed them to calculate the structural complexity of a system evolving in time, leading to a plot similar to Figure 5¹. We can see that the multiscale structural complexity of the system traces out a peak over time.

Figure 5 Conceptual depiction of the shape of the structural complexity curve calculated for dynamic systems when using a renormalization scheme, as done explicitly in reference 1.

Let’s remind ourselves of why we’re interested in the shape of quantified complexity in dynamic systems. I mentioned earlier that the solution to a hard problem can be expected to exist, physically, as some configuration of high complexity; a sweet spot that is neither trivially simple nor completely random. The reason I am focusing on a definition of complexity that uses a renormalization scheme is that it is structural in nature; it is a complexity measure that says something about the physical structure we expect to exist in complex things (e.g. the presence of hierarchies, increasing levels of aggregation at each level, etc.) rather than merely a degree of complexity.

Recall that under the algorithmic definition of complexity, higher complexity corresponds to an increase in the size of the program needed to summarize the object, and as mentioned above, this definition of complexity fails to account for the sweet spot we know exists. However the notion of “program size” is important, because whatever physical structure exists as the solution is one that must be computing the output needed to solve a hard problem. A better definition of complexity for our purpose would be one that combines the concept of “program size” from algorithmic complexity with the concept of structural complexity already discussed.

I will define a so-called structural-algorithmic complexity (SAC), which conceptually means a system’s peak in structural complexity occurs where its program (physical arrangement that computes the output) is most complex.

Since algorithmic complexity is not computable in general (it’s more of a concept than a calculation) I won’t bother formalizing SAC mathematically (maybe some day). For now, just consider the best possible (under all its constraints) solution to exist at a point in time when the system we are building reaches its maximum SAC.

Bringing all this back to nature’s deadlines, we can say that solutions happen at specific points in time because that is when the structural complexity of our system reaches the point where it can compute the outputs needed to solve its hard problem. The fact that this complexity forms a well-defined peak also means there isn’t much wiggle room for arriving at the right structure. Our efforts either produce the right configuration or they produce something quite inferior.

The following figure shows the idea of our efforts undershooting and overshooting nature’s deadline. In many cases, we are either releasing something too early or too late, with respect to the right solution.

Figure 6 Physical solutions existing at different parts of the structural complexity curve.

Undershooting is the scenario where we haven’t allowed our solution to gestate and settle into something that truly works; to the left of the red line in Figure 6. Our rush to get something out the door leads to poor decisions in terms of what pieces to include and how to connect them. Our solution hasn’t taken on the needed level of structural complexity, and fails to deliver the outputs needed.

Overshooting is the scenario where we are adding more pieces than necessary, forcing our creation into a more random, less structured state. This can be caused by second-guessing our intuitive choices, or our project having “too many cooks in the kitchen.” In this scenario we have overshot nature’s natural settling point, where convergence precipitates an effective solution.

Undershooting and overshooting nature’s deadline is to be expected, absent any way to predict when the right solution will precipitate. Every project has its own unique set of dependent pieces and interactions, meaning different projects will have very different deadlines. There is simply no way to know when our project’s natural deadline will occur, and any solution we build that straddles the peak is likely to be far from optimal, given the peak’s precipitous drop off.

The situation appears rather dire, and yet nature is solving the most difficult problems all the time. Nature has undoubtedly figured out how to regularly produce solutions that sit at the peak of its own deadlines. The environment changes all the time, with different stressors presented to species that must adapt. Nature doesn’t undershoot and overshoot its mark because if it did it wouldn’t solve the hardest problems, which it does, regularly.

There must be a solution, a meta solution, to our ultimate problem of aligning our efforts to nature’s deadlines.

Structure and Staggering

Nature’s deadlines are moments in time where our solution’s structural complexity reaches its maximum. We cannot “shift the peak” of nature’s deadlines. Nature needs what she needs, and it will happen when it happens.

But that doesn’t mean we cannot enter the peak sooner. By “enter the peak sooner” I mean we can immediately create something that includes parts and interactions that must be present in the final solution. But how is this possible if we can’t know what the final configuration looks like?

Recall my last essay/episode⁶, where I talked about how we create categories in order to make sense of our world. We do this because there are far too many details that permeate our reality for our minds to make sense of. The human mind is able to navigate our complex world because it can create high-level categories for all those details. These categories are the most invariant parts of the systems we encounter.

The human brain is our best example of a solution to the hardest problem of all: general intelligence. The human brain is also the most structurally complex system we know of, and by no coincidence it solves this and countless related problems by creating categories; by spotting the most invariant aspects of its environment and using those as anchors on which to base our decisions and understanding.

The way we can take definite, correct moves at the outset of any project is by focusing on the most abstract aspects of the challenge, and ensuring details are introduced only when absolutely needed.

To see what I mean, observe the artist in the following video:

Using pencil sketching as an example of focusing on high-level structure before deepening the realism in order to make problems tractable. Video by Ivan Samkov

Notice the artist’s movements. Rather than working away at the details, the artist is lightly touching the canvas, laying down only the most abstract outline of the desired result. The efforts are the least-detailed possible, attempting to capture the most critical aspects of the object; the essence of the solution.

Capturing only the most abstract aspects of the things we create is a fundamentally important part of building good solutions. To deal only with the most abstract parts is to focus on that which is most invariant, and what is most invariant must be true.

We can know the must-haves from the beginning, because they are the most invariant aspects of anything we create. If we swap one or more characters in a story we are writing there is still a connective tissue that never changes. If we alter features in our software there are still parts that will never be revised. There are countless ways we might add details to a person’s eye when sketching their face, but the eye must still look like an eye.

Just as the artist slowly builds upon the most invariant and abstract aspects of the solution, so must we only lay down the most high-level attempts, adding details only as needed. The following figure depicts the gradual increase in detail.

Figure 7 Gradually adding details over time, keeping our priorities aligned to the most invariant aspects of any problem.

By ensuring our efforts are as abstract, and thus invariant, as possible at any given time we allow ourselves to “enter the peak” earlier. It is the peak we enter, rather than some area to the left or right, because the most invariant parts are the beginnings of the structural complexity that must be there; they must be aspects of the correct physical configuration.

Figure 8 Entering the peak immediately, despite knowing almost nothing about what the final physical configuration will look like.

This approach is agnostic to the artificial deadlines we create in modern life, because no matter when those occur, whatever we release at the artificial deadline will contain parts that work. Sure, the prematurity of an artificial deadline means some good details will be missing, and the tardiness of an artificial deadline means our solution will contain many unnecessary parts. But there will be a great deal of value delivered, because our solution will contain pieces of the structure expected by nature.

And finally, what about projects that are free from artificial deadlines? What about the entrepreneur who is slowly building a product or service, or a writer in no rush to produce their masterpiece? What about the sculptor or painter who is building to learn, more than to produce things for others to consume? These are much more natural settings for creativity, and something we should all strive for. To create as our ancestors must have created; unencumbered by the preoccupation of how long something takes. Taking on tasks, and our motivation to do them, as they come. It is this kind of behavior that allows our best possible solutions to emerge because they get us as close as possible to nature’s deadline.

But even operating under this idealized behavior still has us wanting to produce work at a decent frequency. Contentment comes from bringing our best selves to the world. We want people to see what we create, and to expect some level of regularity.

I recommend staggering our work. In this scenario we work on a number of things in parallel, not knowing which one of our many projects will be released next. Imagine working on a podcast, with many ideas in play. Rather than constructing some specific schedule, whereby our next episode must be the one we’re currently working on, our next episode will be whatever converges at that time. We naturally have many ideas and possibilities, and we can bounce among them, adding only the invariant structure as we go. Naturally, by the time we wish to release the next episode, there will be one that has germinated the most, for reasons that are unknown and that don’t matter.

Even larger corporate projects can benefit from working on all things in parallel, rather than assuming some artificial timeline of dependent tasks. Let nature bring forth what precipitates at its essential time. If we are building in an invariant fashion, we will produce solutions that work.

In any case, let nature choose your deadlines. Focusing on structure beats focusing on time.

References

  1. Multiscale structural complexity of natural patterns
  2. Using self-dissimilarity to quantify complexity
  3. Self-Dissimilarity: An Empirical Measure of Complexity
  4. Complexity as thermodynamic depth
  5. Multiple scales and phases in discrete chains with application to folded proteins
  6. Thinking Before Eating: Developing a More Rigorous Heuristic

Sean McClure
NonTrivial

Independent Scholar; Author of Discovered, Not Designed; Ph.D. Computational Chem; Builder of things; I study and write about science, philosophy, complexity.