Information Flows Designed for Emergent Complexity
In complex environments, teams are built for emergent outcomes. This means they need to be optimized for coherence. Coherence refers to a particular dispositional state of a team, in which the complex feedback loops between trust, power, and action thresholds are optimized for flow (see A Source Code for Team Flow). The degree of power asymmetry in the system is a crucial determinant of team flow. Power asymmetry, in turn, is a function of information flows, and it can be regulated by designing resilience through sensemaking up-hierarchies.
I introduced the notion of a sensemaking up-hierarchy in my previous article. Here I want to describe briefly what a sensemaking up-hierarchy is, and how it serves team action potential.
There are two key advantages that AI gives smart machines like self-driving cars and AI-augmented diagnostic devices: 1) Global updates and 2) Self-organized information.
Global updates means that when one of the agents (nodes, machines) learns something new, every member of the network can be updated with the new information, and every new member of the network arrives fully “up to speed.” Humans, on the other hand, learn individually, mostly through trial, error, and ongoing experience. Unlike AI, and like all other living agents, human learning is not rule-bound, but based on protocols and affordances. The dance of protocols-and-affordances is what we call “experience” in the human endeavor. This is a feature that educators consistently fail to recognize, and it is a major reason why machines seem to be outperforming people at “intelligence”: we have slowly but surely come to define learning as a rule-bound function.
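As a toy sketch of what global updating looks like in a machine network (a hypothetical illustration, not any particular vehicle fleet's protocol): all nodes read from one shared knowledge store, so one node's discovery is instantly visible network-wide, and a late-arriving node joins fully “up to speed.”

```python
# Toy model of "global updates": every node shares one source of truth,
# so learning by one agent updates the whole network at once.

class MachineNetwork:
    def __init__(self):
        self.shared_knowledge = {}   # fact -> value, visible to every node
        self.nodes = set()

    def join(self, node_id):
        # A new member arrives fully "up to speed": it sees all prior learning.
        self.nodes.add(node_id)
        return dict(self.shared_knowledge)

    def learn(self, node_id, fact, value):
        # One node's discovery becomes instantly available network-wide.
        self.shared_knowledge[fact] = value


net = MachineNetwork()
net.join("car-1")
net.learn("car-1", "icy_bridge_on_route_9", True)
late_arrival = net.join("car-2")              # joins after the discovery
print(late_arrival["icy_bridge_on_route_9"])  # True
```

Contrast this with human learning as described above: there is no shared store to read from, so each person must acquire the “fact” through their own experience.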
Self-organized information in AI means that algorithms specify the order, priority, and probability of data, such that there is an information hierarchy that continuously shifts in response to new (and expiring) elements. The popular navigation app Waze is an example of an AI program that continuously re-ranks priorities according to shifting conditions.
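A minimal sketch of this kind of shifting information hierarchy (an assumed toy mechanism, not Waze's actual algorithm): items are ranked by severity, new reports enter the hierarchy as they arrive, and stale reports expire out of it.

```python
# Toy model of "self-organized information": a ranking that continuously
# reshapes itself as reports arrive and expire.

class ShiftingHierarchy:
    def __init__(self, ttl):
        self.ttl = ttl          # seconds before a report expires
        self.reports = {}       # item -> (severity, timestamp)

    def report(self, item, severity, now):
        self.reports[item] = (severity, now)

    def ranked(self, now):
        # Drop expired reports, then order the rest by severity.
        live = {k: v for k, v in self.reports.items() if now - v[1] < self.ttl}
        self.reports = live
        return sorted(live, key=lambda k: -live[k][0])


h = ShiftingHierarchy(ttl=60)
h.report("accident_on_I5", severity=9, now=0)
h.report("slow_traffic_main_st", severity=4, now=10)
first = h.ranked(now=50)   # ['accident_on_I5', 'slow_traffic_main_st']
later = h.ranked(now=65)   # ['slow_traffic_main_st'] -- accident report expired
```

The point is that the hierarchy is never fixed: the same query at two different moments yields two different orderings of what matters most.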
Designing a sensemaking up-hierarchy for people means creating the conditions for global updating and self-organizing information in human networks.
Global updating means that information flows outward from local events in all directions. Above a certain scale, however, too much information can easily overwhelm individuals in the network. Therefore, we need to design appropriate constraints that “gate-keep” information flows in smart ways.
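One way such a gate-keeping constraint could work (a hypothetical design sketch; the relevance scores and threshold are invented for illustration): a local event still propagates outward, but each member only receives items that clear a relevance threshold for their context.

```python
# Toy model of a gate-keeping constraint: route an event only to members
# for whom it clears a relevance threshold, sparing everyone else the noise.

def gatekeep(event, members, threshold=0.5):
    delivered = []
    for member in members:
        # Each member carries an (assumed) relevance score per topic.
        relevance = member["relevance"].get(event["topic"], 0.0)
        if relevance >= threshold:
            delivered.append(member["name"])
    return delivered


team = [
    {"name": "ops",   "relevance": {"outage": 0.9, "pricing": 0.1}},
    {"name": "sales", "relevance": {"outage": 0.2, "pricing": 0.8}},
]
print(gatekeep({"topic": "outage"}, team))   # ['ops']
print(gatekeep({"topic": "pricing"}, team))  # ['sales']
```

The event still travels in all directions; the gate simply decides, per member, whether the signal is worth their attention.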
Self-organizing information in human systems means identifying patterns in the information flows that organize meaning into larger wholes. This is the “up-cycle” in the sensemaking up-hierarchy. In turn, these larger wholes, when redistributed “down” in the network, compose new contexts for local action. This is the “sensemaking” aspect of the sensemaking up-hierarchy. An adequate design will compensate for the need of human systems to gate away information to avoid overload, because the function of global updating is not so much that every node holds every data point (that is true for AI, because it lacks sensemaking ability), but that every agent works within an updated context. As complexity increases, contexts continuously shift. Hence, there is no doubt that AI must be a reliable partner in designing and operating sensemaking up-hierarchies. But this partnership should be able to outperform strictly rule-bound AI systems in the context of what matters most to people, and the particularly human challenges we face on our planet today.
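The up-cycle and down-cycle described above can be sketched as a two-step loop (a toy model under assumed mechanics; the themes and aggregation rule are illustrative, not from the article): local reports are aggregated into a larger whole, and that summary, not the raw data points, is redistributed as an updated context for every agent.

```python
from collections import Counter

# Toy model of the "up-cycle" and redistribution: aggregate local signals
# into a pattern summary, then push the summary back down as context.

def up_cycle(local_reports):
    # Identify the dominant pattern across raw local signals.
    counts = Counter(r["theme"] for r in local_reports)
    dominant, n = counts.most_common(1)[0]
    return {"dominant_theme": dominant, "signal_count": n}

def redistribute(context, agents):
    # Every agent receives the summarized whole, not every data point.
    for agent in agents:
        agent["context"] = context
    return agents


reports = [{"theme": "supply_delay"}, {"theme": "supply_delay"}, {"theme": "churn"}]
agents = [{"name": "team-a"}, {"name": "team-b"}]
context = up_cycle(reports)
for a in redistribute(context, agents):
    print(a["name"], "->", a["context"]["dominant_theme"])  # both: supply_delay
```

Notice that each agent ends up holding two values, not three raw reports: the gate has discarded detail, yet every agent is still acting within the same updated context.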
Finally, to come full circle: in organizations today, most power asymmetry is a function of asymmetrical information processing. This cannot be solved merely by giving every person access to every data point in the database, or every element in the information stream. Rather, it is a matter of information design: the ability of information to self-organize into larger wholes, and of those wholes to be transmitted as updated contexts for local action.