What is the problem with social media?

Jordan Hall
Deep Code
13 min read · Mar 23, 2018


It appears that at long last the popular sentiment seems to be boiling with enough intensity to (maybe) do something about the problem of social media.

Good. It is high time.

But, even if it turns out that we have reached the point where we are ready to do something, we come face to face with the reasonably difficult question of what we should do. And, in order to know what we should do, we must first have a good understanding of what is, in fact, the matter.

I have spent a great deal of time over the past several decades endeavoring to boil down the many different issues we are facing in our contemporary environment to identify the deepest sources of trouble. My proposition is that if we can address these problems in a systematic and well-tailored fashion, we will have properly responded to the challenge (and have a decent chance of handing our children a world we can be proud of). On the other hand, if we fail to go to the very root and merely treat the symptoms, I believe that we (and they) will be unhappy with the results.

In the case of social media, I would like to bring your attention to four foundational problems. There are likely more, but each of these is real, deep, and as far as I can tell, not yet fully understood.

In brief, these four are:

  1. Supernormal stimuli;
  2. Replacing strong link community relationships with weak link affinity relationships;
  3. Training people on complicated rather than complex environments; and
  4. The asymmetry of Human / AI relationships.

In this essay, I will endeavor to articulate each of these four issues. This is not lightweight stuff, and while I will do my best to make the points as clear and compact as I can, I expect that this will prove a bit challenging. Medium tells me that this is a twelve-minute read; perhaps this is the kind of essay where you grab a fresh cup of coffee and settle in . . .

Problem 1: Supernormal stimuli

The human organism is an evolved homeostatic system roughly adapted to a particular set of environmental characteristics. While humans are remarkably flexible and capable of both individual and group learning, we are nonetheless sitting on a mammalian, primate, Homo substrate that is relatively hard-wired in its response to its environment.

Supernormal stimuli (also called hypernormal stimuli) are inputs that hijack an animal’s instincts beyond their evolutionary purpose. Evolution is remarkably willing to make do with “just enough” and as a consequence, our evolved systems are vulnerable to stimuli that overwhelm our evolutionary heuristics. In an important way, at a biological level, we really can’t tell what is good for us.

For example, up until about thirty thousand years ago, equating “sweetness” with “healthy” was a useful error. It worked. If you equated sweetness with “good, healthy, nutritious, desirable”, then you picked that nice sweet fruit, ate it, survived and passed on your genes. But as human beings began to take over from raw nature and more and more of our lived environment became a human-constructed environment, the gap between what you really need and what your sensemaker is tuned to make you seek became an exploit.

It turns out, it is possible to refine the sensation of sweetness away from the context that associated it with nutrition and have the signal without the thing that it is supposed to deliver. Cotton candy is sweet, but not nutritious. Indeed, this is more than possible, it can be incredibly profitable: a spoonful of sugar makes everything go down.
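For the code-minded reader, here is a minimal sketch of the structure of the exploit (every food and number below is invented for illustration): an organism that can only sense the proxy signal will reliably pick the engineered stimulus over the real thing.

```python
# Toy model of a proxy heuristic being exploited by an engineered stimulus.
# All foods and numbers are invented for illustration.

def evolved_preference(food):
    """The organism senses sweetness only; nutrition is invisible to it."""
    return food["sweetness"]

# In the ancestral environment, the proxy tracked the real need well enough.
ancestral_foods = [
    {"name": "ripe fruit",   "sweetness": 7,  "nutrition": 6},
    {"name": "unripe fruit", "sweetness": 2,  "nutrition": 2},
]

# An engineered food refines the signal away from what it once signaled.
modern_foods = ancestral_foods + [
    {"name": "cotton candy", "sweetness": 10, "nutrition": 0},
]

for label, foods in [("ancestral", ancestral_foods), ("modern", modern_foods)]:
    pick = max(foods, key=evolved_preference)
    print(f"{label}: picks {pick['name']} (nutrition={pick['nutrition']})")

# ancestral: picks ripe fruit (nutrition=6)
# modern: picks cotton candy (nutrition=0)
```

The heuristic itself never changes; only the environment does. That is the whole trick.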

This deliberate use of supernormal stimuli is a kind of black magic because it gets in behind your conscious sensemaker to lead you into all sorts of bad (self destructive, fitness diminishing) behaviours. And as “seeking sweetness” is a part of our evolved hardware layer, it is the sort of thing that is very hard for individuals to overcome.

We humans have become masters of supernormal stimuli. Our ability to give ourselves what we want has far outstripped our ability to sense what we really need. And in the accelerating win/lose game-theoretic arms race that has characterized late-20th-century society, the use and abuse of supernormal stimuli has become an almost requisite tool in the product-marketing toolkit.

In every possible market niche, we see an arms race for attention and choice making (purchasing). And in each case, a ruthless (and reckless) use of increasingly sophisticated understandings of human physio-emotional and psycho-cognitive systems (and their supernormal vulnerabilities) is part of any viable competitive strategy. Anyone who fails to take advantage of supernormal stimuli is selected against, and the general drift of the entire market is towards increasing disruption of our evolved homeostatic systems.

It is important to note that, at a social level (i.e., using “collective intelligence”), we have shown some capacity to develop defenses against these supernormal vulnerabilities (e.g., the emergence of social movements regulating sugar, nicotine, etc.). However, these social defenses tend to move relatively slowly and to be unevenly distributed. Moreover, the general rule that decentralized market-based mechanisms outcompete top-down regulatory mechanisms seems to be in play here.

Now we come to supernormal stimuli in the context of social media. Here we see the gamification and hijacking of both the evolved systems for “attention allocation” (what we pay attention to) and for “social relationship.” Notifications (particularly bings and buzzes on our phone), likes, hearts, simple and explicit “friending,” even just the extraordinary pace and vastness of the news feed itself — all of these are supernormal stimuli that play havoc with our homeostatic systems (e.g., neurotransmitter feedback loops) and the adaptive capacities that rely on them (e.g., forming and maintaining real relationships, thinking about reality).

Importantly, while the application of supernormal stimuli to our eating choices or mating choices is certainly destructive, the use of supernormal stimuli in social media is particularly risky. This is because supernormal stimuli in social media directly undermine our capacity for individual and collective intelligence. In this context, the hijacking of our evolved functions presents the potential of disrupting our social capacity to respond to the problem itself. Not good.

Problem 2: Replacing strong link community with weak link affinity

The second primary risk associated with social media is that it serves to change the conditions under which we form and maintain human relationships, in a fashion that leads to a meaningful reduction in the number and type of “strong-community” connections and a substantial shift of time and attention towards “weak-affinity” connections.

In a natural environment, the primary selection criterion for relationship formation is physical proximity. Simply put, you can only form relationships with people who are within relatively easy travel distance; therefore, this is who you form relationships with. Notably, while local communities will naturally tend to form shared sensibilities, the simple fact of diversity of experience and perspective will lead to significant heterogeneity of both ideas and values within any physical community. Even siblings in a family will naturally develop substantially heterogeneous sensibilities and experiences.

In this natural environment, the exigencies of community require that all participants develop adequate personal and interpersonal skillfulness to navigate this heterogeneity: regardless of how much you might disagree with your uncle, if both of you are required to maintain the success of the hunt, you will learn how to get along.

When you combine high skillfulness at getting along with a lot of time in relationship with heterogeneous perspectives, you get the kinds of “strong” links out of which we can fabricate real community.

By contrast, social media enables an entirely new kind of human relationship: the “weak affinity” bond. In the social media space, it is trivial to (a) find people who very closely share your own perspectives and preferences and to (b) avoid people who do not (up to and including simply “blocking” them from your perception with the click of a mouse).

These kinds of bonds are the “cotton candy” of relationship. On the one hand, they are easy and pleasant. On the other hand, they build little of enduring value. In the context of “attention-exploiting media,” where a premium is placed on getting as many eyeballs as possible, this new potential for weak affinity becomes an operational mandate. A social platform that lacks the ability to filter or block unpleasant participants will quickly be outcompeted by one that has that capacity.
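To make the difference between the two link-formation rules concrete, here is a toy simulation (all parameters are invented): proximity-based bonding leaves each agent surrounded by disagreement it must learn to navigate, while affinity-based bonding collapses every neighborhood into near-unanimity.

```python
# Toy comparison of two link-formation rules. Agents hold a scalar "opinion";
# we measure how much disagreement each rule leaves inside a neighborhood.
# All parameters are invented for illustration.
import random
import statistics

random.seed(0)
N = 200
opinions = [random.uniform(-1, 1) for _ in range(N)]
positions = [random.uniform(0, 10) for _ in range(N)]  # physical location

def neighborhood_spread(link_rule):
    """Average spread of opinion among each agent's linked partners."""
    spreads = []
    for i in range(N):
        partners = [opinions[j] for j in range(N) if j != i and link_rule(i, j)]
        if len(partners) > 1:
            spreads.append(statistics.stdev(partners))
    return statistics.mean(spreads)

# Strong-link rule: you bond with whoever is nearby, agreeable or not.
proximity = lambda i, j: abs(positions[i] - positions[j]) < 0.5

# Weak-link rule: you bond only with those who already agree with you.
affinity = lambda i, j: abs(opinions[i] - opinions[j]) < 0.1

print("proximity spread:", round(neighborhood_spread(proximity), 2))  # ~0.58
print("affinity spread: ", round(neighborhood_spread(affinity), 2))   # ~0.06
```

Nothing about the agents changed between the two runs; only the rule by which links form. The heterogeneity that strong community depends on is filtered out before any relationship even begins.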

As adaptive creatures (particularly developmentally during childhood and adolescence), we cannot help but respond and adapt to the signals of our physical and social environment. Weak affinity environments reward and punish behaviours very differently from strong community environments.

Thus, as we spend more and more time in virtual social spaces and (by necessity) less and less time in physical social spaces, we observe the continual movement of virtual social space towards asymptotically superficial echo chambers, with the participants in these echo chambers trained in emotional fragility, virtue signaling, conformity policing, and/or neo-sociopathy. These are not the ingredients of an enduring society.

Problem 3: Training people on complicated rather than complex environments

The deep problem here has to do with a distinction between “complicated” environments and “complex” environments and how participation in these kinds of environments trains for very different adaptive capacities.

A rich examination of complexity and complication is outside of the scope of this document, but in brief the distinction is that a complicated system is defined by a finite and bounded (unchanging) set of possible dynamic states, while a complex system is defined by an infinite and unbounded (growing, evolving) set of possible dynamic states.

Thus, for example, a Boeing 777, while very complicated, is ultimately a bounded system. Given enough information about the Boeing 777, we can predict with precision how it will respond to given inputs.

By contrast, a humble bumble bee is intrinsically complex. While we might, in principle, be able to get a good sense of how it will respond to given inputs, it is always possible that the system itself (the bee) will simply change; therefore, no matter how much information we have, our ability to predict is always limited.

Note that it is always possible to see a complicated system as complex by putting it into relationship with a complex system. Thus, if a Boeing 777 is struck by a bird while in flight or is flown into a mountain, these effects will lead to the destruction of the Boeing 777 as a complicated (predictable) system and its reconnection with unbounded complexity. The fact that complexity is the base case of the natural world and that complication is always a temporary and artificial condition is of singular importance. [For those who want to learn more about these concepts, I have created a short video here.]
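For readers who think in code, a minimal sketch of the distinction (both “systems” below are invented stand-ins, not models of real aircraft or bees): a complicated system is a fixed transition table, so a perfect model of it predicts it forever; a complex system can rewrite its own rules, so even a perfect copy eventually diverges.

```python
# Minimal sketch: complicated = finite, fixed state set (perfectly
# predictable given full information); complex = the rule set itself can
# change. Both systems are invented stand-ins for illustration.
import random

class Complicated:
    TABLE = {("cruise", "throttle_up"): "climb",
             ("climb", "level_off"): "cruise",
             ("cruise", "throttle_down"): "descend"}

    def __init__(self):
        self.state = "cruise"

    def step(self, event):
        self.state = self.TABLE.get((self.state, event), self.state)
        return self.state

class Complex:
    def __init__(self):
        self.state = "foraging"
        self.rules = {("foraging", "disturbance"): "evading"}

    def step(self, event):
        if random.random() < 0.2:  # the system itself changes, unannounced
            self.rules[(self.state, event)] = f"novel_{random.getrandbits(32)}"
        self.state = self.rules.get((self.state, event), self.state)
        return self.state

def perfectly_predictable(make_system, events):
    """'Enough information' is modeled as an identical copy of the system."""
    real, model = make_system(), make_system()
    return all(real.step(e) == model.step(e) for e in events)

events = ["throttle_up", "level_off", "disturbance"] * 20
print(perfectly_predictable(Complicated, events))  # True, always
print(perfectly_predictable(Complex, events))      # False, almost surely
```

No amount of additional information rescues the model of the complex system, because the thing being modeled does not hold still.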

In practice, this distinction shows up in two very different adaptive responses when one has an eye towards making good choices. In the case of complication, the optimal choice is to become an “expert”. That is, to grasp the whole of the system such that one can make precise predictions about how it will respond to inputs.

In the case of complexity, the optimal choice goes in a very different direction: to become responsive. Because complex systems change, and by definition change unexpectedly, the only “best” approach is to seek to maximize your agentic capacity in general. In complication, one specializes. In complexity, one becomes more generally capable.

In this context, we can say that a fundamental issue of something like the Facebook News Feed is that it is training our sensemaking systems to navigate a complicated space in a complicated manner (“browse and select”). And, because our attention is limited, the more time we spend training in this condition, the less time we spend training our sensemaking systems to explore an open complex space.

Moreover, we witness the same dynamic on the other side of the UI. If and when I encounter something (say a post on my News Feed) that motivates me to some action, the only actions that are available to me within the FB UI are:

  1. To select one of six emoticons to “like” the post;
  2. To comment on the post;
  3. To share the post;
  4. To write a post of my own (which will be separated from the original post by the News Feed algorithm in the attention stream of the FB audience).

Again, the deep problem here lies less in the specific actions that are possible within the Facebook UI than in the basic fact of presenting an environment of radically simple (or complicated) choices rather than complex ones. Of replacing choice with selection.
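One way to see “selection versus choice” is as a difference in type signature. In the sketch below (all names are illustrative assumptions, not Facebook’s actual API), the feed’s action space is a closed enumeration you optimize over, while a complex environment’s action space is open-ended and must be generated.

```python
# "Selection vs. choice" as a difference in type signature. All names are
# illustrative; this is not Facebook's actual API.
from enum import Enum
import random

class FeedAction(Enum):   # complicated: the entire action space, up front
    REACT = "react"
    COMMENT = "comment"
    SHARE = "share"
    POST = "post"

def engagement_score(post: str, action: FeedAction) -> float:
    """Stand-in for whatever signal the optimizer is tuning against."""
    return random.random()

def respond_in_feed(post: str) -> FeedAction:
    """Selection: pick the 'best' element of a finite, fixed menu."""
    return max(FeedAction, key=lambda a: engagement_score(post, a))

def respond_in_world(situation: str) -> str:
    """Choice: the return type is open-ended; the 'menu' does not exist
    until you generate it."""
    return f"a novel action composed in response to {situation!r}"

print(respond_in_feed("a post"))     # always one of exactly four values
print(respond_in_world("a flood"))   # drawn from an unbounded space
```

The first function can be optimized to perfection; the second can only be practiced. That asymmetry is what the platform trains into us.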

In a truly complex environment, we are always empowered (and indeed often required) to generate novel (creative) actions in response to perceived circumstances. In other words, our field of choice is unbounded and, therefore, symmetric to the unbounded field potential of the complex system in which we are living. We are thus challenged to and trained to improve our responsive capacity to complex circumstances.

In a complicated environment, we are ultimately engaging in the very different mode of simply selecting the “right” or “best” action from a finite list. This is an optimization game, and while it can be extremely useful when competing in finite complicated environments (e.g., Chess) it is a capacity that is oblique to creative response. Therefore, again, the basic problem is that meaningful (and widespread) participation in this kind of platform is training our agency away from capacities that are truly adaptive and towards a narrow specialization for particular complicated games.

[Moreover, we can notice that even when we select the relatively complex option of commenting or writing our own post, the overall environment of Facebook serves to narrow even this choice into a relatively complicated game. The pace of change in Facebook, and the almost complete erasure of what had presence even a moment ago, constrain “success” to that subset of expressions that satisfy the dual conditions of (a) grabbing attention and (b) driving actions of the sort that are perceived and upregulated by the attention-mediating algorithm.]

Problem 4: The asymmetry of Human / AI relationships

The final problem at the root of social media is a bit more challenging to grasp, perhaps because it is the most novel in our collective experience: the intrinsic asymmetry between human and artificial intelligence.

Garry Kasparov is a much better chess player than you are. In 1997, the chess AI Deep Blue beat him. Shortly thereafter, chess AI became effectively unbeatable.

Ke Jie is the human champion at the much more complex game of Go. In 2017, AlphaGo beat him. His evaluation of the match:

“Last year, it was still quite humanlike when it played,” Ke said. “But this year, it became like a god of Go.”

Later in 2017, a new version of the AI named “AlphaGo Zero” took it a step further. In three days, it taught itself to go from knowing nothing about Go to beating the version of AlphaGo that had bested Ke Jie, 100 games to 0. If AlphaGo was a god of Go, what in the world might we make of AlphaGo Zero?

One thing is for sure: it is very much not human. We are rapidly moving into the Era of AI and we are going to have to get used to this fact and to its deep implications.

When we enter into relationship with an entity like Facebook (or Google, or Apple, or . . .) we still have the basic expectation that we are entering into a vaguely symmetric, human, relationship. At worst, we unconsciously expect the sort of unpleasant bureaucratic relationship that we enter into with Walmart, IBM or General Motors.

Nothing could be further from the truth. No matter how devious you might imagine the suits of the corporate world being, they are still, ultimately, just human. These social media AI? When it comes to grabbing and holding our attention, or to analyzing and profiling our data, the algorithms of social media stand in relationship to our human sensibilities as AlphaGo Zero stands to an average Go player. They are like gods. And gods that, for now at least, don’t have our best interests in mind.

Imagine if your spouse, your therapist, and your priest all entered into a conspiracy with a team of world-class con men to control and shape your behaviour. Sound a bit unsettling? Well, consider what the Facebook algorithms alone know about you. Every conversation you have, even those that you type out but don’t send, is perceived by the Facebook AI and then analyzed by technology designed by thousands of researchers schooled in the very cutting edge of psychology and cognitive neuroscience.

Every conversation you have, and every conversation the other 1.4 billion people on the platform have. In one second, the Facebook AI learns more about how people communicate, and how they make choices as a result of their communication, than an average person will learn in fifty years.

The Facebook AI is AlphaGo. The equivalent of AlphaGo Zero is a few minutes in the future.

We need to get our heads around the fact that this kind of relationship, a relationship between humans and AI, is simply novel in our experience and that we cannot rely on any of our instincts, habits, traditions or laws to effectively navigate this new kind of relationship. At a minimum, we need to find a way to be absolutely dead certain that in every interaction, these gods of social media have our individual best interests in mind.

So there you go. Four fundamental problems. These problems are not limited to social media, of course. Supernormal stimuli show up in our cell phones and video games. Training on complication is a major problem with our educational system. And, of course, in the next few decades, AI is going to show up everywhere.

But today we are face to face with social media and now is as good a time as any to begin trying to figure out how we might go about comprehensively addressing these major challenges of the modern era. Of course, we are already suffering from a major breakdown in our collective intelligence. It seems challenging for us to think clearly about anything at all — much less something as nuanced and tricky as this. Well, it seems highly unlikely that things are going to get better all by themselves. So I hope that we are able to muster the clarity and conviction to take responsibility for this world that we have built — while there is still time to do so.
