Designing Societal Resilience in the Era of AI

Spyros Michailidis
12 min read · Jun 5, 2023


Image of a futuristic city generated by OpenAI’s DALL·E 2.

Even so, mankind will suffer badly from the disease of boredom, a disease spreading more widely each year and growing in intensity. This will have serious mental, emotional and sociological consequences, and I dare say that psychiatry will be far and away the most important medical specialty in 2014.
The lucky few who can be involved in creative work of any sort will be the true elite of mankind, for they alone will do more than serve a machine.

Isaac Asimov on the effects of automation, from “Visit to the World’s Fair of 2014,” The New York Times, August 1964

Progress in AI and automation is seismic and, despite the promise of abundant wealth and endless productivity, it comes with equally severe perils.

This article will attempt to touch upon the potential loss of purpose among people whose jobs may be replaced by AI. We argue that it is imperative to start tackling this problem with urgency, and we propose a way to do so.

Reports predict that hundreds of millions of jobs will be lost[1], while wages will almost certainly come under pressure as companies gain more options for deploying hybrid human/AI workforces[2].

The resulting new situation in the job market could undermine social cohesion and the very existence of Western democracies.

Sense of Purpose

Everyday routine is important even if many feel they want to change it. Well before the age of retirement, people should prepare for the transition, planning how to fill the free time that their retirement will bring.

Importantly, for the lower-income segments of Western societies, retirement can come with very few options, bringing stress and loneliness.

On the other hand, if one loses their job well before retirement age, the imposed ‘free time’ can bring even more severe psychological repercussions: growing insecurity, anxiety, and desperation.

The problem of unemployment as we know it, however, may be redefined in the coming decades, as the emerging industrial revolution drastically transforms our production model.

When asked which jobs are most vulnerable, ChatGPT responds:

  • Routine manual jobs, such as assembly line workers and drivers.
  • Routine cognitive jobs, such as data entry and telemarketing.
  • Jobs in the service industry, such as retail sales and food service.
  • Jobs in the financial industry, such as bank tellers and loan officers.

My personal view is that more, if not all, industries will be affected. Over time, employment may remain available to only a few segments of the workforce, including some labor-intensive jobs, caregiving professionals, and highly specialized scientists (such as the AI-assisted researchers working on the next cutting-edge technological breakthroughs).

Even coders, until very recently one of the most sought-after professions, will be affected, as AI-driven tools will soon be able to take requirements expressed in natural language and build entire software systems using the language and development framework the prompt indicates[3]. Like a double-edged sword, this technology will democratize the art of programming, empowering everyone to build software products, while at the same time rendering non-expert coders obsolete.
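To make this concrete, here is a minimal sketch of what such a natural-language-to-code workflow already looks like today: a short TypeScript snippet that sends a plain-English requirement to OpenAI’s chat completions endpoint and gets back a code draft. The endpoint and payload shape are OpenAI’s public API; the prompt, model choice, and helper-function name are illustrative assumptions, not a description of any specific product.

```typescript
// Minimal sketch: turning a natural-language requirement into code via an LLM.
// Assumes an OpenAI API key in the OPENAI_API_KEY environment variable and
// Node.js 18+ (for the built-in fetch). Function name and prompt are illustrative.

async function draftCodeFromRequirement(requirement: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4", // any capable chat model works for this sketch
      messages: [
        {
          role: "system",
          content:
            "You are a coding assistant. Return only code in the language and framework the user specifies.",
        },
        { role: "user", content: requirement },
      ],
    }),
  });

  const data = await response.json();
  // The generated code (plus any explanation the model chose to add).
  return data.choices[0].message.content;
}

// Example usage: a requirement expressed purely in natural language.
draftCodeFromRequirement(
  "Write a TypeScript Express endpoint that returns the current server time as JSON."
).then(console.log);
```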

ChatGPT says that new jobs will be created due to progress in AI and automation, including:

  • Jobs in the field of data science and analysis.
  • Jobs related to the development and maintenance of AI and automation systems.
  • Jobs in healthcare and eldercare, due to an aging population and the need for human care and interaction.
  • Jobs in green technologies, such as renewable energy and sustainable agriculture.

The first two categories will probably represent a small percentage of jobs, while the fourth would emerge anyway. Jobs in healthcare and eldercare will surely gain importance, in a sector that has always been underserved.

Knowing how important it is to fill one’s day with a constructive routine, modern societies will turn to solutions such as Universal Basic Income (UBI)[4]. But will this compensate for the lack of purpose that unemployment will bring? Yes, people may have food on the table and will probably have a place to stay. But will a UBI-secured standard of living provide what one would call a fulfilling life, with good options for spending one’s time purposefully? Or will a new class system emerge, separating those who contribute to financial growth from those who don’t?

Its creators may want to believe that the metaverse can be a solution to this societal problem, offering a life of virtual abundance where users spend most of their time enjoying, through their avatars, the things they could only wish for. But would they willingly give up what physical contact provides and what humans have valued for centuries?

Innovative solutions will certainly emerge in the coming years as the problem takes shape. But the timing and scalability of the proposed solutions will be critical, as it is easy to miss changes brewing in pockets of society while falsely believing that the problem can be addressed at any time.

Here, again, are the societal changes ChatGPT proposes for dealing with job losses caused by progress in AI:

  1. Upskilling and retraining programs for workers displaced by AI and automation.
  2. Universal basic income or other safety net programs.
  3. Investment in alternative industries to create new job opportunities.
  4. Redefining the concept of work to include non-traditional forms of employment.
  5. Re-evaluation of the tax system to support the transition.

Although these suggestions could help smooth out the severity of the impact, I would argue that they fall short of addressing the fundamental shift in how people perceive their role in society.

It could take one or two generations of social change, and potentially unrest, to decipher that shift and let it sink in, while we realize that our educational systems need to be drastically refocused. This will be a long journey of enforced transformation, but the impact on our societies is imminent and will be felt within just a few years, perhaps less than a decade.

Redefining a Disrupted Society

In several interviews, Sam Altman and other OpenAI scientists have repeatedly pointed out how worried they are about the societal impact of their work and how everyone needs to be aware of, and prepare for, what’s coming[5], especially governments, which need to regulate the use of AI before it’s too late.

More recently, Geoffrey Hinton[6], Eliezer Yudkowsky[7], and Elon Musk, among others, have raised the alarm, with calls for a pause of six months or more[8] to reflect and prepare.

But I don’t think a pause is the right way forward. Even if OpenAI is genuinely concerned, and even supposing Microsoft agreed to pause for a while (an unlikely event), others working in parallel would quietly continue, aiming to close the gap with their leading rival. That would be unfair to OpenAI, which happens to be the frontrunner in the spotlight, and it would solve nothing: the genie is out of the bottle and nobody can put it back in.

Suppose there were an agreement in the Western world that the likes of OpenAI, Google, Meta, Amazon, Apple, Palantir and so forth pause development and shift their efforts to helping governments draft a regulatory framework for AI. Would Chinese companies such as Baidu, Tencent, and Alibaba follow this pause? The most probable answer is that they wouldn’t, in which case pausing has no effect other than handing rivals a time advantage. This seems so obvious that it makes one wonder about the true motivation behind the call for a pause, considering that Elon Musk is building his own AI company[9].

What can be done, however, is to regulate how AI is used, who has access to the code, and who can use it to do what. As with nuclear power, it was impossible not to discover and harness it, but it is possible to regulate who has access to it and how it is put to use.

Humans carry both good and bad in them by default. I recall Google’s old motto of ‘don’t be evil’. It is now largely forgotten and, to be honest, it was never realistic, because no human-driven entity in this world is purely good or purely bad. If we want to be realistic, we need to accept that these two sides coexist, and that it is a matter of putting regulatory frameworks in place to let the good side prevail over the bad.

Allowing more collective control is a good guiding principle. Concentrating a lot of power in the hands of a small group or entity is a bad idea, as monarchies, non-democratic regimes, and market monopolies or cartels demonstrate. They all work primarily to hold their ground, cement their position, and crush their rivals, no matter the consequences and casualties. I don’t think examples are needed here.

Distributing power and decision-making across many more people, making it as collective as possible, seems the right answer, since humanity as a whole will want to survive rather than self-destruct.

Humans can be misguided, but mostly when they are not informed and aware of the real facts and conditions. Take the example of a potential nuclear disaster. Delusional dictators like Vladimir Putin or Kim Jong Un could threaten the world with nuclear weapons. Feeling extremely threatened themselves, they could even press the launch button of destruction, with an obedient entourage too afraid to cancel the apocalyptic order.

Could such a disastrous decision even be an option if it did not rest in the hands of a single human and, instead, required the collective approval of a council of several people: experts in nuclear power, humanitarian disasters, the environment, and ethics? If you’re thinking this would make the decision impossible, you’re right.

But such decisions should not be possible in the first place, because no one should be allowed to use nuclear weapons; nuclear technology should be permitted only for peaceful purposes. Yet the superpowers that possess it have never wanted to regulate it properly, because they would lose their advantage.

AI is like nuclear power in many ways. It’s a game for the very few and powerful and it gives an important competitive edge to those leading the race. But it’s clear that humanity needs to handle it differently than nuclear technology.

In all likelihood, as AI is applied across business sectors and industries, we are going to see major disruptions to our productive models, affecting employment and the fabric of our society in several ways. Importantly, it will be hard to assess the psychological impact of this disruption on people who no longer have the daily routine associated with a regular job.

We should expect several different approaches aimed at dealing with this problem, but it’s important to remember that viability and scale are paramount. For example, a plan that works in Scandinavian countries will not necessarily be transferable to other parts of the world, so whatever is proposed needs to be tried and adapted to become effective at scale.

One thing seems clear: Initiatives and collaborative efforts are needed, from all sectors of society, to propose and implement novel solutions that prioritize human needs.

Our Approach

How do we deal with what’s coming? A small technical team in Athens proposes, and is building, an online environment that will be open to everyone and has the following key features:

Inclusive networking: in a society that favors segregation and supports mostly one-way communication and information consumption, lacking interactivity and feedback mechanisms, we believe we need to respond with an online environment that allows users to connect and interact in ways that are constructive, empowering, and energizing, encouraging them to work and contribute towards a common goal. By design, we want to filter out the toxicity that current social networks allow and to encourage trustworthy relationships and collaboration.

Purposeful and mission-driven: Our aim is to enable individuals to play an active role in shaping a more connected and sustainable society that values social cohesion and environmental preservation. To make this vision more concrete, we must determine specific metrics for measuring contributions. Our initial focus will be on supporting localized sustainable development, such as promoting tourism and local production in underdeveloped areas, as well as fostering solidarity and social support among members facing hardship.

Reward-driven: everyone needs incentives. The goals we set for ourselves, and what drives our behavior, are tied to desired outcomes; choosing to do one thing over another is usually associated with a reward. Rewards aren’t necessarily positive, but positive reinforcement is what we will be fostering. Incentivizing people to work on things that benefit themselves and their peer groups is essential and may be the key success factor of the project. Incentives can be tied to self-fulfillment, empowerment, participation, and contribution, in addition to income.

Governance: in a reshaped, AI-driven society, we should embrace the evolution of our governance systems. The foundations of our societies remain those of the United Nations and the European Union, but representative democracies have shown their limitations and can be improved by allowing citizens to participate more frequently and directly in decision-making (adopting, for example, a model like Switzerland’s). We can clearly see how technologies such as blockchain-supported DAOs[10] can help in this direction, off-loading fundamental rules to an entity that cannot be tampered with and allowing stakeholders to vote on local topics.
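As a rough illustration of the kind of rule off-loading we have in mind (and not a description of any specific DAO framework or of the Internet Computer’s governance tooling), here is a hedged TypeScript sketch of a proposal-and-voting record: the rules (voting window, quorum) are fixed data, votes are appended, and the outcome is derived purely from the recorded state. In a real deployment this logic would live in a smart contract or canister rather than in application code; all names and thresholds below are illustrative assumptions.

```typescript
// Illustrative sketch only: a minimal proposal/voting structure of the kind a
// DAO would keep on-chain. Names, quorum, and rules are assumptions for clarity.

type Vote = { voter: string; inFavor: boolean };

interface Proposal {
  id: number;
  title: string;
  closesAt: Date; // the voting window is part of the immutable rules
  quorum: number; // minimum number of votes for the result to count
  votes: Vote[];
}

function castVote(proposal: Proposal, voter: string, inFavor: boolean): void {
  if (new Date() > proposal.closesAt) throw new Error("Voting has closed.");
  if (proposal.votes.some((v) => v.voter === voter))
    throw new Error("Each stakeholder votes once.");
  proposal.votes.push({ voter, inFavor });
}

function outcome(proposal: Proposal): "accepted" | "rejected" | "no quorum" {
  if (proposal.votes.length < proposal.quorum) return "no quorum";
  const inFavor = proposal.votes.filter((v) => v.inFavor).length;
  return inFavor > proposal.votes.length / 2 ? "accepted" : "rejected";
}

// Example: a local topic put to the community.
const proposal: Proposal = {
  id: 1,
  title: "Fund a local produce promotion campaign",
  closesAt: new Date(Date.now() + 7 * 24 * 3600 * 1000), // one week from now
  quorum: 3,
  votes: [],
};
castVote(proposal, "alice", true);
castVote(proposal, "bob", true);
castVote(proposal, "carol", false);
console.log(outcome(proposal)); // "accepted"
```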

Putting Things into Practice

We have been developing a system that applies these design principles, anticipating a need that is now gaining urgency and importance.

It hasn’t been an easy task, because of the ambition and scope of the endeavor, the associated technological challenges, and the difficulty of securing funding, as the topic didn’t seem very attractive to the (admittedly very few) investors we have contacted.

As the driver is not profit, it may be that this type of project needs the support of those who are going to be directly affected, namely the community itself.

As a result, we are exploring crowdfunding options, employing blockchain technology, which perfectly suits our goals of incentivization and empowerment of the weak, even though our platform can only be partly decentralized for purely practical reasons.

Although the decentralized playbook dictates abolishing everything centralized, one needs to be realistic and admit that not everything can be implemented in a decentralized way (at least not yet). Even the Internet Computer[11], our decentralized platform of choice, cannot currently support a full-fledged recommendation engine, to name just one of the required features. However, our loyalty and reward scheme, and the delegation of governance to the community, are excellent candidates for decentralization.
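To give a flavor of what the decentralizable part might look like, here is a hedged sketch of an append-only reward ledger: contributions earn points, and a member’s balance is derived by replaying the ledger rather than stored as mutable state, which is what makes it a natural fit for a tamper-resistant, on-chain implementation. All names, activities, and point values are illustrative assumptions, not our final design.

```typescript
// Illustrative sketch of a loyalty/reward ledger, the kind of component we
// consider a good candidate for decentralization. In a real implementation
// this would live in an on-chain canister; names and values are assumptions.

interface RewardEntry {
  member: string;
  activity: string; // e.g. "local-production-support", "peer-assistance"
  points: number;
  timestamp: Date;
}

// Append-only log: balances are never edited, only derived from the history.
const ledger: RewardEntry[] = [];

function recordContribution(member: string, activity: string, points: number): void {
  ledger.push({ member, activity, points, timestamp: new Date() });
}

function balance(member: string): number {
  return ledger
    .filter((entry) => entry.member === member)
    .reduce((total, entry) => total + entry.points, 0);
}

// Example usage
recordContribution("alice", "local-production-support", 20);
recordContribution("alice", "peer-assistance", 5);
console.log(balance("alice")); // 25
```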

Call to Action

This article can serve as our manifesto. Although it lacks detail, it roughly explains what we’re trying to do, and we would like to hear what you have to say.

We believe the why is obvious. We see a new set of emerging problems that can undermine the fabric of our (already imperfect) societies. The positive side is that new, better societal structures can emerge from this hardship: more humane, more empathetic, and fairer. And we want to contribute proactively to avoiding the alternative dystopian scenarios.

We envisage a more altruistic model in which we transition to allocating more time to solving the problems of others (including sustainability and the preservation of the environment) than to our own small bubbles, instead of ignoring that our neighbor’s house is on fire. Everyone knows that kicking the can down the road solves nothing, and reality will hit us all if we don’t act. This initiative is our way of acting.

We argue that there is no way to deal with this type of emerging problem without collaboration and grassroots participation. The new model needs to offer incentives, both traditional, in the form of improving one’s living conditions, and ethical, in knowing that what we do serves a purpose: contributing to a better collective life for ourselves and the generations to come.

We think it’s time to open our initiative to the world, listen, and, importantly, explore how we can engage and (in)form a community of like-minded frontrunners who can help put this into practice faster, spread the word, and make it work despite the odds.

Although we have not (yet) explored all available funding options, we are open to the world and welcome exchanges of ideas and expressions of interest from volunteers who share our vision and would like to work with us.

We will soon update this article with the channels through which you can participate at this stage.

References

[1] AI could replace equivalent of 300 million jobs — report (BBC) https://www.bbc.com/news/technology-65102150

[2] How AI like ChatGPT could change the future of work, education and our minds (SF Chronicle) https://www.sfchronicle.com/tech/article/ai-chatgpt-education-work-17846358.php?sf176671153=1

[3] How to get Codex to produce the code you want! https://microsoft.github.io/prompt-engineering/

[4] https://www.investopedia.com/terms/b/basic-income.asp

[5] How OpenAI’s CEO Balances AI Development and Risk | Tech News Briefing | WSJ https://www.youtube.com/watch?v=riNhPkcAoPw

[6] https://www.theguardian.com/technology/2023/may/02/geoffrey-hinton-godfather-of-ai-quits-google-warns-dangers-of-machine-learning

[7] Pausing AI Developments Isn’t Enough. We Need to Shut it All Down | by Eliezer Yudkowsky https://www.lesswrong.com/posts/oM9pEezyCb4dCsuKq/pausing-ai-developments-isn-t-enough-we-need-to-shut-it-all-1

[8] Pause Giant AI Experiments: An Open Letter https://futureoflife.org/open-letter/pause-giant-ai-experiments/

[9] https://www.forbes.com/sites/mattnovak/2023/04/14/elon-musk-forms-new-ai-company-in-nevada-called-xai/

[10] https://www.investopedia.com/tech/what-dao/

[11] https://internetcomputer.org/
