The World as We Know It Is Dying — Comparing How We Deal with AI to a Grieving Journey
There is an interesting discussion about whether the widespread adoption of Artificial Intelligence will generate demand for new kinds of jobs, but that is not what we should be talking about at all. Since the dawn of time, new technologies have demanded new jobs: the carriage demanded the coachman, computers demanded developers, and so on… In light of this, why do we keep asking whether one thing, in particular, will behave exactly like everything that preceded or followed it? The bad news is: we’re probably not ready for the answer yet.
At first, I thought about using Gartner’s hype cycle to set the stage for this discussion, but it felt as boring as it is obvious. So I’ve decided to be a bit unorthodox and use the seven stages of grief as the underlying guide. But why? Technology evolves at a pace we cannot keep up with, either as a society or as individuals, and this often creates a continuous feeling that “the world as we know it is dying”. This is not meant to be mournful; it’s just a simple way to help people better grasp what is happening, and to foster the discussion on what’s coming. On top of that, it’s worth highlighting that this likely applies to any technology, but artificial intelligence has the potential to “replace” people, so anything about it is heightened by the everlasting discussion of how we embed humanity into a computational system, and of how we envision a sustainable future. In this context, humanity is not about having feelings, but it might be about understanding them, or about how they affect the outcome of a given scenario… it’s also about ethics, and about what a given society understands as right and wrong.
So let’s do it!
First and foremost, as with grief, people might be in different stages. This is important for fostering this kind of discussion, which requires largely different points of view, but it might also create barriers to a healthy debate. To keep this short, we’ll focus on how each stage expresses itself when it comes to embracing technology. In the end, we will reflect on how to push this discussion forward, what the actual discussion should be about, and how to move on.
Of the seven stages, shock resonates with me as the trickiest one… So I got this definition from Google, “a sudden upsetting or surprising event or experience”, so we can start by creating a shared understanding of it. “Event or experience” leaves little room for debate; I personally see the difference as the event being that singular mark that, by itself, embarks you on a grief journey, whilst the experience is a sum of events, or an ongoing situation, that culminates in it (it’s pretty much like being suddenly fired versus ending a relationship that just wasn’t working anymore).
For the sake of this discussion, we’ll focus on the “experience”, because here is where it gets tricky: “upsetting or surprising” implies that you have been through something, processed it, and assessed how it impacts you. In this scenario, the “something” is the awareness of what artificial intelligence is and “its potential”.
Awareness, because we’re not naturally wired to deeply understand something before drawing conclusions; we have a survival instinct that propels us to respond immediately to anything that might be a threat. And “its potential” because, as we have limited information the first time we draw a conclusion, that conclusion is often misleading, or biased by the perception of those who provided us with the information.
That’s not possible…
The first reaction to “machines will be able to mimic human actions” is either “but they’ll never be able to *whatever*” (which quite often is something the person prides herself on doing), or “but I will no longer be alive when that happens”. In light of this, denial is not restricted to how feasible it is, but is often associated with how soon that future will arrive. The funny thing is that people seem to forget they already solve their credit card issues by naturally talking to a chatbot, or that they find information by asking an entity to search a large information base and bring back a short selection of relevant results. Both activities were, not long ago, performed by call center assistants and librarians.
The curious thing is that, due to our confirmation bias, this is the moment when we start to search for cues that will reinforce our perception. Whether it’s “computers will never be as creative as people” or “there will be a need for basic income, as computers will do all the work”.
It was all for nothing…
Once we stop denying, confronting the possibilities might give rise to feelings we haven’t foreseen, and might not even recognize as our natural selves. Now, imagine this: you are probably at the stage where you’ve caught yourself thinking that a machine will be (or already is) better at something you’ve dedicated your life to, and there’s no way you can compete with it. Your natural reaction is to get angry… It probably feels as if you’ve put a lot of effort into nothing.
So here’s the thing: you shouldn’t go around cursing every Alexa device you see… but it might eventually happen. In case it does, compare your reaction to your normal behavior; if it was an overreaction, that’s a good moment to start reflecting on how this affects you.
What if …
Bargaining is about trying to get control back, about negotiating with yourself and the universe in order to go back to a scenario you feel comfortable with. Here is where people start to envision how they can adapt, or how technology should be in order to earn their “approval”. Although this reflection is a great step towards action, during this stage people are still driven by emotions, with limited understanding of the technology, which leads to misguided assumptions and actions.
The problem is: when it comes to artificial intelligence, and technology as a whole, most people stop at this stage. We have a natural optimism that allows us to believe “things will work out for the best”, so we can “wait and see what the future holds”. One of the trade-offs we make with the universe is that we agree to dedicate attention when it becomes part of our lives, even if there’s a possibility it’ll be too late.
It is quite common for people to play with the odds until it’s too late. But here’s the problem: that day will eventually arrive… and on that day you will be suddenly dragged into the next stage…
What if I cannot make it?
By the time you stumble upon it, it’ll be too late, because this is how technology works: if you realize it exists but don’t know how it works, you’re probably doomed. Technology underpins our lives; it is built to feel invisible and highly intuitive, but it is inherently complex. It’s not something you’ll understand overnight, and trying to do so by the time you start working with, or are being replaced by, artificial intelligence might drive you into a depressive stage.
Depression, in what relates to technology, is about being overwhelmed or feeling hopeless. Even if you try to break free from this feeling, it’s the kind of journey where there are too many roads and no maps to guide you.
Friendly advice: you are going through the same thing your parents or grandparents went through the first time they attempted to use a computer or a smartphone… The most important thing is to embrace this as an opportunity for something new, and not as a case of personal failure… Allow yourself to embrace what’s new, learn from other people’s experiences, and try not to lose your mind in the process.
I can experiment, or at least understand it better…
This is where the self-reflection actually begins… Evaluating your strengths and weaknesses and identifying opportunities is the first step towards experimenting with technology. For some people this means dedicating time to attend a conference, lecture or meetup; for others it’s about going deep into the theoretical aspects; and there are also those who prefer to get their hands dirty and start trying to code, even if it’s something as simple as a conversational interface.
Regardless of how, the important thing is that this is the moment when people start to truly dedicate themselves to understanding the technology, and start to envision how they can build a feasible future of human-computer collaboration.
Friendly advice: be honest with yourself. One common cognitive bias we have relates to our desire to see ourselves as unique (and often as unbeatable). It’s important to understand which of your qualities stand out, and focus on them. But this is something to be used as a lever, not mistaken for an armour.
If that’s what the future looks like… embrace it!
There’s not much to say here… The important thing about acceptance is your approach towards it. Having a negative approach will create an unconscious blocker against new advancements and different points of view. I’m defining a neutral approach as “I’m open to learning and doing just as much as I need for my current work/life”. This is not harmful, but it won’t get you very far, and this is a growing field where we really need people willing to go that extra mile.
In this context, having a positive approach is about being willing to change the game. Engage in the discussion, define new ways of human-machine collaboration, craft the future that better suits our society, or the society we want to live in. It’s not about agreeing or disagreeing; it’s about having a well-argued point of view that is helpful to the discussion, not a gut feeling with no reasonable arguments backing it.
Moving the needle
Focusing on what really matters
“Which is the best economic model?”, “Should recreational cannabis be legal?”, “Can we have dessert before dinner?” or “Is winter better than summer?”
There are a few dilemmas we face as a society to which there is no actual right or wrong; it’s that continuous pursuit of alignment between our personal points of view and how they actually impact our lives, whether by legislation, common sense, or simply agreeing to disagree.
By the time you’re ready to move on and embrace technology advancements, or in this case Artificial Intelligence, you can focus on the discussion that really matters. It’s not about “will there be jobs”; it’s about “what will the concept of a job be” or “how large will the marketplace be”. There are a few fundamental topics you should keep in mind, and even hold a personal point of view on, just like you do for other societal dilemmas.
How we embed “humanity” into a computer is a great dilemma when we discuss the widespread adoption of Artificial Intelligence. Although we still have a long way to go from a technical perspective, we often overlook that this is more about “how do we define what is right on subjects where we cannot reach a consensus on the right answer” than it is about “how do we develop an algorithm that properly weights the available variables in order to make a decision”. We probably shouldn’t embed the kind of bias that would make an AI treat one person differently from another… On the other hand, culture and specific group behavior are among the biggest demonstrations of our humanity; how do we deal with this? And this is quite a fundamental question that underpins any AI application. We could push further and use the controversial example of a self-driving car facing an inevitable accident: how do we decide whose life should be prioritized?
Work & society
How much automation is enough, if there’s any limit at all? We will probably never reach a consensus on where the line is beyond which we should not transition an activity from strictly human to performed by artificial intelligence. It’s important to highlight that this is not restricted to work; it extends to any kind of activity, including personal relationships.
When it comes to work, the marketplace will probably adapt gradually, as it has always done, but the discussion on basic income is still needed. This is still something of a background topic, on which we haven’t agreed what the right answer is, so I’ll share my personal take on it. A quite common argument in the basic income discussion is: “at the pace technology is evolving, we’ll soon reach levels of automation that will drastically increase unemployment; without basic income people won’t be able to consume, which might crash our economic model while dragging people into poverty”. Although its logic is hard to argue with, it is as truthful as it is shortsighted, as it only provides a short-term mitigation that does not solve the problem. The discussion of what work will become, and whether our economic model would still function in a scenario where anything could be produced without human interference, will not yield an agreed plan we’ll all gladly execute, but we should know where we’re heading, as it involves a large-scale transition of every part of our society.
Pushing the envelope
Takeaways on how to be prepared to move forward…
Test your conclusions: if you believe “my work will never be replaced by a robot”, do some quick research on what is already being done. Medicine already has groundbreaking examples that will help you reflect on where human-computer collaboration is heading.
Define your personal point of view, but keep yourself open to adapting it
Be aware of your personal beliefs and how they work in a technology-driven world. Having a point of view is helpful when you are exposed to new scenarios.
Be aware of which role you want to play, and be ready for it
I don’t like it when I read things that say “lead the change” or “be a champion of…”. I truly believe in personal accountability, but let’s be honest: not everyone is a leader, an early adopter or a front-line warrior… and that’s okay! We are naturally wired to adapt; this is what has kept us around for so long. But we should strive to use this as something positive, and not as an evolutionary excuse to delegate action to the next generations. In light of this, the important thing is to be prepared and open, and to keep in mind the role you’d like to play and what you need in order to accomplish it.
Wondering where to go from here?