How lessons from Artificial Intelligence can teach us to fix social media

Photo by Eric Ward on Unsplash

This past week I came across two excellent articles that serendipitously interconnect. One is by Joi Ito, who writes about the social and ethical challenges of Artificial Intelligence (AI) and Machine Learning. The other is by Jordan Greenhall and is a great analysis of the problems with social media. It dawned on me that the challenges and proposed solutions that Joi outlines could serve as inspiration for creating a new type of media that would replace social media and serve humanity and the planet, instead of the reverse.

Joi argues that we should learn from the historical approach at MIT (he is director of the MIT Media Lab) and move from “Artificial Intelligence” to what he calls “Extended Intelligence.” Instead of thinking about AI as separate from or adversarial to humans, it’s more helpful and accurate to think about machines augmenting our collective intelligence and society.

In his Medium article, Greenhall argues that there are four major flaws in social media that lead to a fundamental breakdown in our collective human intelligence. Facebook and other social media are not fundamentally constructed to serve our best interests.

The flaws that Jordan Greenhall points out are almost all tied to the same underlying construction: we humans serve social media, rather than social media serving as an augmentation or extension of collective human intelligence.

So what if we took Joi Ito’s approach and applied it to social media? We could invent a new type of media that extends humans instead of being adversarial to them.

Here are the four fundamental problems with social media that Greenhall brings forward (I’ve summed them up in a very simplified way), and below each point are a few ideas for solutions, using Joi Ito’s approach to AI:

1. Supernormal stimuli: Just as our brains think sugar is great for us because sweet-tasting things in nature (like fruit) are good, they also think continuously checking notifications on social media is excellent, because devoting a lot of attention to natural social relations is usually beneficial.

Solution: Habits or even addictions can be okay, if what you are attracted to is something that benefits you. What if we created a media format that gave you valuable insights instead of empty calories? It might sound idealistic, but I’m not arguing that we remove all sugar, just that we add some nutrition to the diet. One of the things that allowed Zuckerberg to build Facebook is that he was not thinking like traditional media: he did not want to give the content and the community on Facebook a specific purpose or moral. But we could rethink this and create a media format that has human insight built into its DNA.

2. Replacing strong link community relationships with weak link affinity relationships: If someone in your family, your sports team, or whichever strong community you are part of disagrees with you, it’s likely you will live with it, and maybe even learn from it. If someone on social media disagrees with you, they will most likely be removed from your feed by an algorithm, and if not, you will probably unfriend them.

Solution: We could build a system that promotes diversity of opinion and deliberately exposes you to people who think differently than you do. It already works in a pure form in places like Quora and Wikipedia, but it could be much more advanced. It’s a matter of algorithms, but also of user interaction and experience design. Let’s say the starting point was an interest instead of who you know. You would pick, for instance, AI, and the network would show you opinions, insights, and ideas from a range of people. Facebook is the reverse: you connect to a uniform group of people, and then you start to push interests to each other. A rough sketch of what such a ranking algorithm could look like is below.
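To make the idea concrete, here is a minimal, hypothetical sketch in Python of a feed ranker that rewards disagreement instead of filtering it out. The per-post `opinion` stance vectors, the `engagement` scores, and the `diversity_weight` parameter are all assumptions for illustration, not a description of any real platform’s system.

```python
def cosine(a, b):
    """Cosine similarity between two stance vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def rank_feed(posts, user_vector, diversity_weight=0.4, k=20):
    """Rank candidate posts so the feed mixes familiar and dissenting views.

    posts            -- dicts with a hypothetical 'opinion' stance vector
                        and an 'engagement' score in [0, 1]
    user_vector      -- the user's own stance vector
    diversity_weight -- how strongly disagreement is rewarded
    """
    def score(post):
        similarity = cosine(post["opinion"], user_vector)
        # Unlike engagement-only ranking, low similarity (disagreement)
        # raises the score instead of getting the post filtered out.
        return ((1 - diversity_weight) * post["engagement"]
                + diversity_weight * (1 - similarity))

    return sorted(posts, key=score, reverse=True)[:k]

# Example: a user leaning one way still sees the opposing post ranked high.
user = [1.0, 0.0]
posts = [
    {"id": "agrees", "opinion": [0.9, 0.1], "engagement": 0.8},
    {"id": "dissents", "opinion": [-0.8, 0.6], "engagement": 0.5},
]
print([p["id"] for p in rank_feed(posts, user)])  # ['dissents', 'agrees']
```

The design choice is the single `diversity_weight` dial: at 0 it collapses back into today’s engagement-only feed, and anything above 0 makes exposure to differing views a deliberate feature rather than an accident.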

3. Training people on complicated rather than complex environments: If you train to become an expert in a complicated environment, you can likely predict what will happen in that system. However, in a complex system, you can’t predict what will happen, so you have to adapt as you go along. An aircraft engineer can most likely predict how a complicated aircraft will behave. A biologist has very limited chances of predicting what a complex bumblebee will do. The Facebook news feed teaches us to browse and select, but not to improvise and adapt (thus taking away one of the powerful features of being human).

Solution: I think it’s key to our future evolution that we start to build media formats that show us context and enable us to see systems instead of just individual parts. Otherwise, we won’t be able to solve the problems we are facing as a species. The good news is that after a decade of working with interactive media, I also think it’s entirely possible to do this if we want. We can see some early iterations of this with data visualization and advanced graphics.
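As one toy illustration of such a “system view,” here is a hypothetical Python sketch that renders a claim not as a lone feed item but inside a small graph of related sources and counterarguments. The nodes and edges are invented for the example; the point is the shape of the presentation, not the data.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical content graph: a claim shown together with its sources
# and counterarguments, instead of as an isolated item in a feed.
G = nx.Graph()
G.add_edges_from([
    ("claim: AI will displace jobs", "source: labor statistics"),
    ("claim: AI will displace jobs", "counter: AI creates new roles"),
    ("counter: AI creates new roles", "source: history of automation"),
])

nx.draw_networkx(G, node_color="lightblue", font_size=8)
plt.axis("off")
plt.show()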

4. The asymmetry of Human / AI relationships: We still don’t realize how immensely more powerful Facebook’s AI is than we are as its human users. In one second, the Facebook AI learns more about how people communicate and how they make choices than an average person will learn in fifty years. So it’s not an equal relationship, but we tend to perceive it that way.

Solution: This problem is the most complex and also the trickiest to solve. But I think it’s connected to the other three issues: if we address those, we will start to realize how we can build morally responsible Human / AI relationships.

There is obviously room for a lot of expansion and improvement to my thoughts above. But the point is that if we apply a new mindset to media, we will move rapidly from current “social media” to a new generation of “Insight media.”