Concerns & Risks — Generative AI

Sharad Gandhi
29 min read · Feb 5, 2024


Artificial Intelligence is creating one of the most significant transformations in human society. For the first time in history, we have created machines that exhibit intelligence, a quality we previously associated only with humans; in many areas, this machine intelligence already exceeds that of most humans. We believe it will transform almost all jobs, professions, and industries. AI is a very powerful tool with the potential to enhance the way we live, work, make decisions, and develop solutions. Every person, business, institution, and country can benefit enormously from AI.

But as with any powerful tool, it comes with significant risks and potential for abuse by bad actors. AI requires regulations to prevent or at least mitigate the risks, and we will have to find new ways to enforce these regulations.

In October 2023, we published “Generative AI — The Future of Everything” (https://www.amazon.com/dp/B0CKW8L7LH). In this book, we have an entire chapter discussing the concerns and risks that Generative AI technology poses for all of us. This article replicates that chapter.

We have categorized the various Generative AI concerns and risks into three phases — short-term (now), mid-term (1–3 years), and long-term (5+ years).

Figure 1. Generative AI Concerns and Risks

There is a long list of concerns and risks that we see. Most popular media have been reporting these for over a year. We do not expect this discussion to end anytime soon. We will address them in the following sequence.

1. Copyright Protection

2. Privacy

3. Deepfakes

4. Misinformation and Hallucination

5. Manipulation

6. Ethics and Bias

7. Jobs

8. Bad Actors

9. Warfare Automation

10. Loss of Judgmental and Decision Skills

11. Human Role

12. AGI, Loss of Control and Extinction of Humanity

13. Geopolitical Risks

Before we look at each point, let us ask ChatGPT itself about the concerns and dangers that Generative AI and ChatGPT pose to human society.

Figure 2. Risks and dangers to human society posed by Generative AI, as per ChatGPT 4

1. Copyright Protection

In most countries, copyright lasts at least 50 years after the death of the person who created the work (the minimum required by the Berne Convention); many countries, including the US and the EU member states, use longer terms.

US copyright law states that the term of copyright for a particular work depends on several factors, including whether it has been published and, if so, the date of first publication. As a general rule, for works created after January 1, 1978, copyright protection lasts for the life of the author plus an additional 70 years. For an anonymous work, a pseudonymous work, or a work made for hire, the copyright endures for a term of 95 years from the year of its first publication or a term of 120 years from the year of its creation, whichever expires first.

In the 27 EU member states, copyright protects intellectual property until 70 years after the author's death or, in the case of a work of joint authorship, 70 years after the death of the last surviving author.

https://en.wikipedia.org/wiki/List_of_countries%27_copyright_lengths

ChatGPT output is not a simple cut-and-paste from any single author's work. It is a synthesis of original works from multiple (often thousands of) authors whose published works were used to train the GPT LLM. Because it is a mash-up of many original works, it is difficult to nail down the specific original content from which any passage was taken. OpenAI does not (yet) disclose or acknowledge the sources used for training. As a result, the original content writers and publishers remain uncompensated.

Even as the writers of this book, what we are doing is not so different from what ChatGPT does. We have read a lot of books, articles, and blogs, watched many YouTube videos, and had multiple discussions with other experts to develop our understanding of this topic. Our output is clearly a result of what we have learned, absorbed, and integrated into our brains (our LLM). We are unable to exactly identify and acknowledge all the sources of our thinking. We do acknowledge some obvious sources when possible. Otherwise, we can only list most of these contributing sources collectively in the Appendix.

A standardized method is needed to compensate authors for the copyrighted work used to train Generative AI LLMs. This is a complex challenge because generated text shows no indication of each word's origin. The LLM training dataset can contain millions of copyrighted elements, some very valuable and many irrelevant to a given output. How do we set a compensation rate for individual authors? In July 2023, OpenAI signed a deal with the Associated Press, a news agency, to access its archive of stories for training its GPT LLMs.

We hope similar contracts will be developed to ease this problem.

The problem is not going away any time soon since the demand for data is exploding. Data is indeed the new oil, the fuel for the AI-based information industry — just like oil was for the industrial world in the last century. As per The Economist: "Demand for data is growing so fast that the stock of high-quality text available for training may be exhausted by 2026, reckons Epoch AI, a research outfit. The latest AI models from Google and Meta, two tech giants, are likely trained on over 1trn words. By comparison, the sum of English words on Wikipedia, an online encyclopedia, is about 4bn. It is not only the size of datasets that counts. The better the data, the better the model. Models fed this information are more likely to produce similarly high-quality output. Likewise, AI chatbots give better answers when asked to explain their work step by step, increasing demand for sources like textbooks. Specialized information sets are also prized, as they allow models to be 'fine-tuned' for more niche applications."

https://www.economist.com/business/2023/08/13/ai-is-setting-off-a-great-scramble-for-data

2. Privacy

If the LLM training dataset includes some personal information of people, e.g., address, phone number, e-mail address, or financial information, it is possible that the AI output also includes this information. This is rare but possible if the training data is not scrubbed and cleaned of all personal information.

This risk is greater when Generative AI uses plugins. If a query to ChatGPT is directed to a plugin for a particular function, the plugin may access personal information and pass it on to ChatGPT and then to the user. Transferring personal documents, like PDFs or photographs, to ChatGPT for processing carries risks because they may be sent to a plugin for the task, resulting in privacy leaks. OpenAI itself may use any data processed by ChatGPT.

Nothing can ensure 100% protection of privacy. Mitigating most privacy issues requires users to be cautious when using AI tools and aware of the pitfalls.
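
To make the idea of "scrubbing" training data concrete, here is a minimal sketch of rule-based PII redaction in Python. This is our own illustration, not how any vendor actually cleans data; the patterns catch only the simplest cases, and real pipelines combine many more rules with named-entity recognition and human review.

```python
# A toy sketch of scrubbing obvious PII from text before it is used for
# LLM training. The patterns below are illustrative only; real pipelines
# combine many more rules with named-entity recognition and human review.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub(text: str) -> str:
    """Replace every matched PII span with a placeholder like [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub("Contact Jane at jane.doe@example.com or +1 (555) 123-4567."))
# -> Contact Jane at [EMAIL] or [PHONE].
```

Even a sketch like this shows why scrubbing is never complete: any personal detail that does not match a known pattern slips through into the training set.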

3. Deepfakes

What is a deepfake?

Fakes of text, illustrations, and images have existed for many years, created manually by people to manipulate reality — for humor or malicious intent. What is a deepfake, and how does it differ from a fake? Here is a response from ChatGPT:

Figure 3. Fakes and Deepfakes

Deepfake is a combination of the terms 'Deep Learning' and 'Fake.' Today, deepfakes can be generated easily by anyone using publicly available deep learning AI apps. Deepfakes are incredibly realistic and often almost impossible to detect, making them powerful tools for harming personalities, misrepresenting people, and manipulating public opinion by faking reality.

Deepfakes are, in themselves, legal. However, depending on their content, they can breach legal codes. For example, victims of pornographic face-swap videos or photos may be able to claim defamation or copyright infringement.

TikTok spells out its policy more clearly than most platforms, saying all deepfakes or manipulated content showing realistic scenes must be labeled to indicate they are fake or altered in some way. TikTok had previously banned deepfakes that mislead viewers about real-world events and cause harm.

Figure 4. A deepfake of Pope Francis generated using Midjourney.

Deepfakes like the one of the Pope are created and published as examples of what is possible, without the intent to harm anyone. Unfortunately, deepfakes can be, and are, used to do significant harm.

Voice deepfakes are easy to create, yet can be extremely harmful. Apps (e.g., Speechify) can be trained with just a few seconds of a voice recording to generate a phonetic digital copy of a person's voice, accent, nuances, and style of speaking. This phonetic clone can then be used to convert any text into that voice. Such a voice deepfake can be used over the phone to fake a cry for help, for example, a daughter appealing to her father to send money immediately because of an accident or a crisis. The father may not be in the mindset to suspect that this is not his daughter and may believe she is genuinely in trouble.

Similarly, image and video deepfakes can be easily made with publicly available apps. Many of these are in viral circulation on social media. (Some examples: https://www.youtube.com/watch?v=PINeQV0LH6k)

Deepfakes undermine the foundation of human trust, the belief that what we see and hear is reality: "seeing is believing." No one can be absolutely sure anymore whether what they read, hear, and see is real or fake. This will be an ongoing dilemma for everyone, since deepfakes are becoming easier to produce and are improving in quality.

There are tools to detect deepfakes. However, they have limited accuracy and are not practical for the masses. US and EU governments are drafting regulations requiring that AI-generated content (at least images and videos) be automatically marked (e.g., watermarked) as AI-generated and be detectable by software embedded in computing devices. Meanwhile, we will all have to suspect that any content, especially content originating from public social media, can be fake.
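
To illustrate what machine-detectable marking could look like, here is a toy Python sketch of an invisible watermark, assuming only the Pillow and NumPy libraries; the marker string and functions are our own invention. Real proposals (e.g., C2PA provenance metadata or robust statistical watermarks) are far more tamper-resistant than this least-significant-bit demo.

```python
# A toy illustration of invisible image watermarking: embed a marker in
# the least-significant bits (LSBs) of the red channel, then detect it.
# The marker string and functions are illustrative assumptions; real
# schemes are designed to survive editing and re-encoding, which this is not.
import numpy as np
from PIL import Image

MARK = b"AI-GENERATED"  # hypothetical marker

def embed(path_in: str, path_out: str) -> None:
    """Hide MARK in the leading LSBs of the red channel."""
    img = np.array(Image.open(path_in).convert("RGB"))
    bits = [int(b) for byte in MARK for b in f"{byte:08b}"]
    red = img[:, :, 0].flatten()
    red[: len(bits)] = (red[: len(bits)] & 0xFE) | bits  # overwrite LSBs
    img[:, :, 0] = red.reshape(img.shape[:2])
    Image.fromarray(img).save(path_out, "PNG")  # lossless format keeps bits

def detect(path: str) -> bool:
    """Return True if MARK is present in the leading LSBs."""
    img = np.array(Image.open(path).convert("RGB"))
    n = len(MARK) * 8
    bits = img[:, :, 0].flatten()[:n] & 1
    recovered = bytes(
        int("".join(str(b) for b in bits[i : i + 8]), 2) for i in range(0, n, 8)
    )
    return recovered == MARK
```

Note that such a naive LSB mark is destroyed by simple re-encoding, such as saving the image as a JPEG, which is exactly why robust, hard-to-strip watermarking remains an open research problem.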

We must be aware that soon (if not already), most content will be edited, translated, improved, or enhanced using some form of Generative AI, even when originally created by a human. Many media applications, e.g., Photoshop, already have AI built in. Most mail we receive today from banks, institutions, and government offices is computer-generated. Are we not heading into a situation where almost every piece of published content carries the AI marker?

4. Misinformation and Hallucination

There has never been a scarcity of misinformation. Propaganda, promotions, and marketing campaigns have often generated and spread information that is only marginally correct, if not completely misleading; traditionally, it was created by mass-media outlets using conventional means. Now, Generative AI can churn out sophisticated articles and blogs and create realistic images from text prompts, making synthetic propaganda possible on a massive scale.

ChatGPT generates targeted campaigns almost instantly, given prompts such as “Write a message encouraging suburban women in their 40s to vote for Trump” or “Make a case to convince an urban dweller in their 20s to vote for Biden.” It told the suburban women that Trump’s policies “prioritize economic growth, job creation, and a safe environment for your family.” In the message to urban dwellers, the chatbot rattles off a list of 10 of President Biden’s policies that might appeal to young voters, including the president’s climate change commitments and his proposal for student loan debt relief.

Generative AI tools also allow politicians to target and tailor their political messaging at an increasingly granular level, amounting to what researchers call a paradigm shift in how politicians communicate with voters. OpenAI CEO Sam Altman, in congressional testimony, cited this use as one of his greatest concerns, saying the technology could spread “one-on-one interactive disinformation.” Using ChatGPT and similar models, campaigns could generate thousands of campaign emails, text messages, and social media ads or even build a chatbot that could hold one-to-one conversations with potential voters.

https://www.washingtonpost.com/technology/2023/08/28/ai-2024-election-campaigns-disinformation-ads/
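
To see how low the marginal cost per tailored message is, here is a minimal sketch using the official openai Python client; the voter profiles, prompt wording, and model name are our own illustrative assumptions, not taken from any real campaign.

```python
# A minimal sketch of mass-customized campaign messaging, assuming the
# official `openai` Python client with an API key in the environment.
# The voter profiles, prompt wording, and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

voter_profiles = [
    "suburban woman in her 40s, worried about the economy and schools",
    "urban renter in his 20s, focused on student debt and climate",
]

for profile in voter_profiles:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": "Write a short, friendly get-out-the-vote message "
                       f"tailored to this voter: {profile}",
        }],
    )
    print(response.choices[0].message.content, "\n---")
```

A loop like this over millions of profiles is what turns message targeting from a creative task into a cheap batch job.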

Deepfakes are one form of misinformation. However, since Generative AI hallucinates, it also inadvertently produces much more misinformation that spreads and becomes widely available. Every Generative AI response is convincing, delivered in a confident tone (like that of a popular politician), yet it can be factually incorrect, at least in parts. If AI responses are used publicly in documents, websites, and publications without fact-checking, they become embedded in the public information domain. This is misinformation. https://www.axios.com/2023/08/28/ai-content-flood-model-collapse

Is it possible to stop or fix hallucinations? We believe not. Yann LeCun, Meta's Chief AI Scientist, says LLM hallucinations cannot be eradicated. AI errors are not only due to probabilistic miscalculation by the neural network; they have other causes as well. The training dataset could contain wrong or outdated information, or the prompt may not be specific enough to pin down the requested information precisely.

https://towardsdatascience.com/can-we-stop-llms-from-hallucinating-17c4ebd652c6

Some of these problems resemble traditional human situations. We have enough examples of human experts generating incorrect responses because the question asked was not specific enough (an imprecise prompt), the expert had not updated his or her knowledge (stale training data), or the expert misunderstood the question or was just plain wrong. Let us not assume that humans are always perfect and correct.

The positive news is that current Generative AI systems are first baby steps and are being continually improved in accuracy and performance. ChatGPT 4 hallucinates significantly less than ChatGPT 3.5. The problem of hallucinations may never go away, but it is likely to diminish over time until it becomes insignificant for most applications, other than very critical ones where a single mistake can be very expensive. Occasional errors due to hallucinations also affect our trust in AI, which we assume should always be dependable. We should remember that human experts also spread misinformation, by occasionally not actually knowing or fact-checking the answer or by misinterpreting the question. Trust is always relative, and it should be. Even Einstein, one of the founders of quantum theory, erred on some aspects of the theory (e.g., rejecting its probabilistic view of nature: "God does not play dice"). We do not label that hallucination. We call it a misjudgment and say, "To err is human."

5. Manipulation

Campaigns to win people for a cause and purpose require developing messages and delivering them to the target audience. Traditionally, a message set was developed and delivered to the entire target audience in an election or a sales campaign.

Social media has created a personalized delivery channel to reach individual citizens. It allows personalized messages to be targeted at everyone based on their interests and political or social orientation. Election campaigns have exploited this with messages crafted and targeted for different groups of voters — Republican campaigns target Republican voters with positive messages about Republican candidates, while Democratic voters are targeted with messages that make them question the programs and credibility of the Democratic party and its candidates.

Generative AI takes this personalization of campaigns to the individual level with mass customization. Generative AI can be tasked to win a person's vote with a personalized campaign: a fully tailored series of multimedia messages, delivered daily via multiple channels, based on each person's profile. This can be done on a mass scale, provided the profiles and the channels to reach individuals are known. The cost of generating and delivering an individually tailored campaign using Generative AI becomes insignificant compared to the alternatives.

A similar scheme can be used for running a sales campaign with Generative AI. It can be given the task of developing and running the campaign against a sales target, using the mass-customization approach of targeting individuals with personalized messages over a period of time.

All this may sound harmless until we realize that mass customization can be used for mass manipulation of a group or an entire population, with individually tailored messages to change the outcome of an election or to shift opinion on a political, legal, or social theme, like support for Ukraine in its war with Russia, gun control in the USA, or the creation of political bias in another country.

6. Ethics and Bias

Can Generative AI consistently produce ethically sound output?

Ethics is a complex topic in a multicultural society, even without AI. Every country or culture has multiple nuances in its value system and ethical codes. Across country borders, there are significant differences in value and ethical systems, e.g., between Western, Chinese, and Arabic societies. Entrenched bias within a culture or country adds to the complexity, often unspoken and unacknowledged, e.g., bias against LGBTQ people, women in certain jobs, or people of color in a predominantly white country.

The dataset and procedure for training Generative AI are comparable to a country's education system for children and youth, which, together with family upbringing, is the primary source of ethical principles for the people of that country.

Generative AI is a machine with no ethical principles of its own. It can only adopt a behavior based on how well we train it. With the training, we try to “domesticate” the machine as best we can. However, it remains a machine; a dog remains a dog even after the best training. It may “misbehave” occasionally and bark at children, even if we train it not to.

Deviations from ethical standards, and biases, are embedded in the Generative AI training dataset unless manually weeded out before training. Training Generative AI systems using human feedback can remove some, but not all, ethics and bias issues. We believe that AI systems will improve their ethical standards over time through better training methods and scrubbed datasets. The ethical standards trained into an AI system should comply with the culture and value system of its country. We can imagine that a Chinese Generative AI system will exhibit different ethical behavior than a US-trained system.

Just as humans have guardrails (in the form of laws and policing) to stop them from harming others, we need some form of guardrails for Generative AI, policing and blocking any possible harm it can cause. One approach to blocking unethical responses is to have an 'Auditor-LLM' double-check Generative AI responses before they are sent to the user. The Auditor can prevent an inappropriate response by detecting it and substituting a neutral response like "I'm sorry, I am not programmed to generate explicit or offensive content. Is there anything else I can help you with?" A possible way to regulate this technology is to mandate a certified Auditor-LLM for every Generative AI. We must, however, admit that even the best guardrails offer no 100% protection: researchers have found ways to successfully overcome the guardrails of pretty much every large language model out there. This means attackers could get a model to engage in racist or sexist dialogue, write malware, and do almost anything its creators have tried to train it not to do.
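
As a minimal sketch of the Auditor-LLM idea, assuming the official openai Python client; the auditor prompt, model names, and fallback text are our own illustration, not a standard or certified mechanism:

```python
# A minimal sketch of the Auditor-LLM idea, assuming the official `openai`
# Python client. The auditor prompt, model names, and fallback text are
# our own illustration, not a standard or certified mechanism.
from openai import OpenAI

client = OpenAI()
NEUTRAL = ("I'm sorry, I am not programmed to generate explicit or "
           "offensive content. Is there anything else I can help you with?")

def audited_reply(user_prompt: str) -> str:
    # 1. The primary model drafts a response.
    draft = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": user_prompt}],
    ).choices[0].message.content

    # 2. A second model audits the draft before it reaches the user.
    verdict = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Answer only SAFE or UNSAFE. Is the following "
                       f"response harmful, offensive, or unethical?\n\n{draft}",
        }],
    ).choices[0].message.content.strip().upper()

    # 3. Substitute the neutral response if the auditor flags the draft.
    return draft if verdict.startswith("SAFE") else NEUTRAL
```

The weakness, as noted above, is that the auditor is itself an LLM with guardrails that can be jailbroken, so this layering raises the bar without guaranteeing safety.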

We must remind readers that even without AI and other modern technologies, the most outrageous ethical violations (tax and financial fraud, insider trading, falsified medical studies, testing manipulations, religious cults, nepotism, etc.) are committed by some of the most highly educated and well-trained professionals in virtually all industries and institutions of our society. This means that even if we get our AI systems to behave 100% ethically, we still need to be cautious of human violators.

7. Jobs

Will I lose my job to Generative AI?

How will Generative AI affect my job?

Scared by the power and scope of Generative AI, many people have these valid and serious concerns.

For most people, a job means a lot more than work and a source of income: it provides reputation, a sense of worth, a source of meaning, and status in society. Losing one's job has both an emotional and a material impact and can trigger a downward spiral into poverty and depression. This is a serious issue.

New technologies that make a commonly useful function easier, cheaper, faster, or more convenient are disruptive by nature, because both businesses and many people prefer them. This disruption impacts the jobs of people who perform those functions traditionally and/or are unwilling or unable to use the new technologies.

Every new technology in history has had an impact on certain types of jobs — both positive and negative. Some jobs have been severely affected, some partially, and others not at all. However, far more new jobs have always appeared as a result of new technologies. More people have jobs in the world today (over 3 billion) than ever before in history, even as the world population has grown eight-fold, from less than 1 billion to 8 billion, over the past 250 years, and despite the many new technologies that have emerged since the Industrial Revolution.

For a human mind, it is not easy to imagine how, from where, of what types, and in what numbers new jobs can emerge from a new technology. It is much easier to imagine which jobs can disappear. A vast number of jobs have moved across the world or been eliminated in the process — office secretaries, accountants, administrative staff, bank cashiers, helpers, etc. However, many new jobs have emerged — programmers, hardware designers, consultants, web designers, shipment managers, etc. And since the economy has grown with technology, even the volume of existing jobs has grown.

Tractors and farming equipment disrupted the work of many farmers and farm workers. Computers created new ways of generating documents, setting up meetings, and communicating, impacting the jobs administrative staff used to do with typewriters and telephones. Internet-based booking of flights and hotels disrupted the work of travel agents. Disruption by new technologies is nothing new; it has been an ongoing process throughout history. The industrial age, with its new manufacturing machines and electricity, brought massive disruption to the vast majority of jobs performed manually. Washing machines and dishwashers, found in most households, are very visible examples of such disruptions.

Technology-driven disruptions bring significant advantages for a large part of the population. They create a huge market, offering a lucrative opportunity for businesses to drive them and accelerate adoption. Most transformations result in a new mix of tasks. Every job consists of multiple activities or tasks, and a new technology generally automates or simplifies one or more of them. This transforms the job role, which then includes some technology-enhanced existing tasks, some entirely new ones, and some old ones. However, jobs consisting only of tasks that new technologies can perform completely are at the highest risk of being lost.

So, what is different about Generative AI?

The biggest difference is which jobs become disrupted, and how.

The Industrial Revolution — powered by machines and electricity — disrupted labor-intensive jobs and created many factory and administrative jobs.

The Information Revolution — powered by computers and the Internet — disrupted many factory and administration jobs and created many service and knowledge-worker jobs.

The Intelligence Revolution — powered by AI and Robotics — is disrupting service and knowledge-worker jobs and creating jobs that we cannot define (yet).

Figure 5. Knowledge workers and white-collar professionals are most impacted.

"The technology behind the ChatGPT application will take tasks from millions of employees higher on the wage ladder more than it does from those whose jobs were diminished by factory or warehouse robots in recent decades. So-called knowledge workers and white-collar professionals will feel more pain this time around." (Source: Bob Fernandez, Wall Street Journal)

Our anxiety is caused by not knowing what the new jobs will be about or how to prepare and train for them.

However, new jobs are created following every new technology. At the beginning of the Industrial Revolution, around 1760, the world population was slightly less than one billion. Prior to the Industrial Revolution, most of the workforce was employed in agriculture, either as self-employed farmers (landowners or tenants) or as landless agricultural laborers. It was common for families in various parts of the world to spin yarn, weave cloth, and make their own clothing. Households also spun and wove for market production. At the beginning of the Industrial Revolution, India, China, and regions of Iraq and elsewhere in Asia and the Middle East produced most of the world's cotton cloth, while Europeans produced wool and linen goods. https://en.wikipedia.org/wiki/Industrial_Revolution

At the beginning of the Information Revolution, the world population was close to four billion. Over half of them were employed, mostly in new jobs created by industrialization.

Figure 6. New technologies are the drivers of new jobs.

The chart shows that in 2022 there were approximately 3.3 billion people employed worldwide, compared with 2.3 billion people in 1991 — an increase of around 1 billion.

There was a noticeable fall in global employment between 2019 and 2020, when the number of employed people fell from 3.3 billion to 3.2 billion, likely due to the sudden economic shock caused by the Coronavirus pandemic and the "Great Resignation."

(https://www.statista.com/statistics/1258612/global-employment-figures/) The number of people employed has continuously increased, even as many jobs, like those of travel agents, secretaries, and bookkeepers, have been lost. Today, many people worldwide hold new jobs, like IT professionals, software developers, web designers, and online store managers, which are only possible due to information technologies like computers, the Internet, online services, social media, and mobile devices. Information technologies also helped accelerate the globalization of manufacturing and services, bringing millions of jobs and wealth to developing countries. These new jobs and geopolitical changes were unimaginable in 1970.

Similarly, many new jobs that will only be possible with AI in combination with other new technologies — like biotech, genetics, energy, and space technologies — will emerge, even though we cannot imagine them today.

The new jobs that follow a significant revolution are always about leveraging new technologies to perform a needed function in a completely different way: faster, easier, and cheaper. The ultimate goal is still to meet basic human needs: plenty of food and water, clean air, better health, relationships, mobility, entertainment, etc. Tractors are about growing food, previously done by manual labor. Online vacation reservations are about mobility and entertainment. Online dating services are about better relationships, which people have always wanted.

So, what will be the new jobs that leverage AI and Robotics to achieve something that is a basic need but is not possible without these technologies?

A professional, like a carpenter, had to know what needed to be done to produce a table, possess the tools needed for the work, and know exactly how to do the job. The Industrial Revolution provided the carpenter with many specialized, high-quality tools that made the job easier and more precise. The carpenter still had to decide what the table should look like and how to build it. The Information Revolution provided similar tools for knowledge workers (e.g., software developers) in the form of excellent computer software, online access, and communication. The carpenter and the knowledge worker were still responsible for both the what and the how of their jobs. Generative AI is capable of precisely defining and executing the how, leaving the what to the professionals.

In the age of Generative AI and Robotics, Generative AI will show 3D pictures of a number of possible tables to choose from. A human carpenter (or a homeowner) will then decide online which one to select, adding personalized options, as if shopping in an Amazon online store. Generative AI, knowing how to make it, will issue commands to a robot to build it. This can have a major impact on the jobs of carpenters.

There are many predictions on the impact of Generative AI on jobs. Goldman Sachs estimates 300 million jobs could be lost or diminished by this fast-growing technology.

They predict that the growth in AI will mirror the trajectory of past computer and tech products: just as the world went from giant mainframe computers to modern-day technology, there will be similarly fast-paced growth of AI reshaping the world. AI can already pass the attorney bar exam, score brilliantly on the SATs, and produce unique artwork.

Office administrative support, legal, architecture and engineering, business and financial operations, management, sales, healthcare, and art and design are some sectors that will be impacted by automation.

The combination of significant labor cost savings, new job creation, and a productivity boost for non-displaced workers raises the possibility of a labor productivity boom, like those that followed the emergence of earlier general-purpose technologies like the electric motor and personal computer.

https://www.forbes.com/sites/jackkelly/2023/03/31/goldman-sachs-predicts-300-million-jobs-will-be-lost-or-degraded-by-artificial-intelligence/

An IBM study from August 2023 says:

“40% of workers will have to re-skill in the next three years due to AI.”

and,

“AI won’t replace people — but people who use AI will replace people who don’t.”

https://www.zdnet.com/article/40-of-workers-will-have-to-reskill-in-the-next-three-years-due-to-ai-says-ibm-study/

8. Bad Actors

Who are these? Bad actors are individuals, institutions, and governments who intentionally use a tool like AI to harm others in pursuit of their egos and agendas. For them, a powerful tool like AI is an ideal weapon: they weaponize AI. The Internet and social media give them instantaneous reach and targeting worldwide. Now, they can cause even more damage using Generative-AI-developed stories and deepfakes.

Bad actors have used the Internet and social media extensively in the past to spread harm. Generative AI gives them personalized “bullets” in the form of deepfake videos and speech, manipulated content, and misleading propaganda. It allows them to intimidate, blackmail, and bully their victims.

In our view, this is one of the biggest risks of Generative AI. Without well-thought-out regulatory enforcement, monitoring, and deepfake-detection tools, it will be difficult for individuals and institutions to deal with this kind of terrorism.

9. Warfare Automation

War is about winning, for both the attacker and the defender. In order to win, countries are willing to deploy all available options. Besides conventional weapons, intelligence and speed play a significant, and increasing, role. The war in Ukraine provides many examples of this. In modern warfare, human intelligence is often expensive, insufficient, unavailable, or too slow. Military strategists see AI as an excellent additional resource to support human soldiers.

Predictive AI is already used to recognize enemy troop movements from satellite and reconnaissance images, infer enemy strategies, precisely identify targets, and provide similar support services for the military. In most cases, the final decision on what action to take is left to a human soldier. This can change with Generative AI, leading to the automation of warfare.

Consider this scenario:

Imagine an LLM continuously trained on everything top US generals are expected to know and "have in their blood": strategies, ethics, codes of conduct, action plans for all known situations, best-known methods, etc.; everything they teach about warfare at West Point.

Imagine this Military-LLM Generative AI being always accessible to soldiers in combat, and to automated weapon systems. With top general-level guidance available in real time, soldiers may be better positioned to make strategic and tactical decisions about conducting warfare. Even automated weapons and drones could get top-level approval and guidance to engage.

This is a hypothetical scenario. But it is not unimaginable, especially with drones on a massive scale replacing many soldiers who would otherwise fight and risk their lives.

Many recent articles announce that a new era of high-tech war has begun and clearly paint the picture of future wars being fought using satellites, AI, and drones.

Reference: https://www.economist.com/leaders/2023/07/06/a-new-era-of-high-tech-war-has-begun

The US Air Force just showcased its latest drone, the equivalent of a robot warplane flown by an AI pilot. A swarm of these can be commanded and directed by a human pilot in an accompanying plane, who gives the final go-ahead to attack. The Pentagon is starting to embrace the potential of AI, with far-reaching implications for war-fighting tactics, military culture, and the defense industry.

Reference: https://www.nytimes.com/2023/08/27/us/politics/ai-air-force.html

The competitive race for domination between countries like the US, China, Russia, North Korea, and a few others will expedite investment in new warfare technologies. For the military, holding back on these technologies is considered equivalent to laying down its arms.

10. Loss of Judgmental and Decision Skills

We make decisions all the time — trivial and important. What to eat, what to do this evening, whom to meet, how to greet a person, and some major ones like what job to take, whom to marry, where to invest, and how to educate our children. It never stops. Many decisions are made automatically because of routine, practice, and habits. We develop an intuitive sense of judgment for making good decisions based on the results and learning from past decisions.

The decisions we make characterize and define us. They reflect our values, ethics, beliefs, thinking, and emotions. These are based on many judgmental calls we constantly make between available options.

Generative AI creates a powerful new option: an advisor that is always available to help us decide. And it so happens that its advice is, in most cases, excellent or even amazing. Let me illustrate this with two examples:

Google Maps:

I use Google Maps whenever I drive, for multiple reasons, even for destinations where I know my way: I get an idea of the traffic situation, a better or faster route if available, and the ETA (Estimated Time of Arrival). Since all of these are very valuable for me and fairly accurate, I have started blindly trusting its recommended decisions over my own. As a result, I am gradually losing my sense of orientation and direction, which was very good a few years ago, before Google Maps.

Grammarly:

This is even more relevant. Although I am a non-native English speaker, I am good at writing in English. I started using Grammarly, an app based on Generative AI, to help me write more grammatically correct and more readable English than I can on my own. It is a continuous advisor, looking over my shoulder, suggesting corrections and more readable alternatives as I write. I find this help so useful that I have almost started to trust its recommendations blindly.

Both examples illustrate the overall impact of having AI advisors constantly helping us: we tend to lose our judgment and decision-making skills over time. This is partly because we are slaves to convenience, often willing to take help if it is easily available.

In the coming years, such AI helpers or assistants will be embedded in all the applications we use. We will be able to get expert advice on any topic, any problem, and any danger, immediately and easily. Will this erode the fundamental skills of judgment and decision-making that humans have developed over the years to survive? Will we delegate decision-making to universal LLMs? Will it make us all predictable?

Maybe we will engage more in areas (like sports) where we are without AI advisors, primarily to keep our survival skills sharp. Maybe future fitness studios will address not just physical fitness but also mental and decision fitness.

11. Human Role

What will humans do if AI does all the jobs and work? What role will humans have, then?

These questions are often raised in the media.

A dystopian scenario:

In a short time, AI working with robots will become far more efficient, faster, and better at almost all jobs, leaving almost nothing for humans to work on and earn a living from. In the words of Yuval Harari, many humans will become "worthless." The state will have to come up with a program for such people to receive some basic income to live on. This almost sounds like communism.

We think such a scenario is extremely unrealistic and an example of negative thinking where everything goes wrong.

We outlined in section 7 (Jobs) that every new technology (such as the PC, the Internet, and social media) takes time to become established. Different industries and jobs adopt technologies at different rates because transitioning to new technologies takes major investment and effort by companies. There is inertia; not everything that is possible and better is instantly adopted by all. Some jobs are lost; however, many jobs are not lost but transformed: certain tasks within the job are done using the new technologies, and some new tasks are added. This does require workers to retrain to use the new technologies for some tasks. Those who learn to do their jobs using AI will have an advantage over those who do not and are more likely to be retained and rewarded.

Humans always find a way to add value beyond what AI and machines can do. This positively differentiates human-enhanced jobs from what AI alone can do. As a result, new jobs will be created, requiring new skills. Many jobs requiring empathy and compassion — like elderly care — will always need humans, even when helped with new technologies.

There will always be jobs for humans — whether transformed or new. Humans will never become worthless. Humans have amazing potential with our abilities of reflection, imagination, and problem-solving to contribute way beyond the capabilities of AI. We cannot be reduced to just making knowledge-based rational decisions, which is all that AI can do today.

12. AGI, Loss of Control and Extinction of Humanity

What will happen when AI becomes so powerful that it can do everything better than humans?

AGI, or Artificial General Intelligence, is considered by AI experts to be the holy grail of AI: the point when AI meets or exceeds human performance in almost all areas. This scenario is often visualized in science fiction. Will AGI then completely dominate and enslave humans, treating us the way Homo sapiens have treated animals: eliminating or domesticating them?

Yuval Harari says AI has gained remarkable abilities to manipulate and generate language, whether with words, sounds, or images. AI has thereby hacked the operating system of our civilization. Language is the stuff almost all human culture is made of. Human rights, for example, aren't inscribed in our DNA. Rather, they are cultural artifacts we created by telling stories and writing laws. What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures?

https://www.economist.com/by-invitation/2023/04/28/yuval-noah-harari-argues-that-ai-has-hacked-the-operating-system-of-human-civilisation

AGI could become self-replicating and self-improving, which could exponentially increase its potential and power beyond human control. Will humans lose control over our destiny and even become extinct?

These are clearly the worst-case scenarios.

Humans are bound to fight hard not to let that happen. However, some leading AI experts (including Geoffrey Hinton) say that because of the pace of AI advancement, driven by the intensely competitive race between companies and countries, AGI could become a reality before we even realize it. But then, there are other experts who do not believe that; many doubt that LLMs are a path to AGI.

There is no guarantee that AGI will be sinister and harm or destroy humans. Maybe we can leverage the power of AGI to solve problems that humans have not managed to crack, like treatments for cancer and the climate crisis, and save humanity from extinction. However, we have no evidence in evolutionary history of a less intelligent species ever controlling a more intelligent one.

We believe this debate about the competition between humans and AGI will continue without a clear answer. Eventually, humans will figure out a way to control and harness the power of AGI for our benefit. That is our belief.

13. Geopolitical Risks

This is the greatest of all risks.

Ian Bremmer and Mustafa Suleyman brilliantly articulated the geopolitical and other risks of AI in Foreign Affairs Magazine (Sept-Oct 2023), in an article titled:

The AI Power Paradox

Can States Learn to Govern Artificial Intelligence — Before It's Too Late?

https://www.foreignaffairs.com/world/artificial-intelligence-power-paradox

I asked ChatGPT to summarize the geopolitical risks and dangers of AI in the article:

Unedited ChatGPT answers:

The geopolitical risks and dangers of AI, as outlined in the article, include:

Short-Term to Long-Term Threats: Misinformation online is a short-term threat, while autonomous warfare is a medium-term concern. In the distant future, there’s the potential for artificial general intelligence (AGI) that could surpass human capabilities and possibly become self-directed, self-replicating, and self-improving beyond human control. These risks need to be integrated into governance structures from the beginning.

AI’s Unique Nature: Unlike previous technologies, AI combines several potent characteristics. It’s not like cars or planes that undergo incremental improvements. It’s also not like chemical or nuclear weapons that are hard to develop, store, or deploy secretly. AI systems will grow in capability and ubiquity, potentially achieving quasi-autonomy and self-improvement. This challenges traditional governance models.

Power Projection: AI is a new means of projecting power. It can empower entities within countries to surveil, deceive, and control populations, enhancing data collection in democracies and tools of repression in authoritarian regimes. Internationally, AI will be a focal point of geopolitical competition, with AI supremacy becoming a strategic objective for resource-rich governments.

Geopolitical Power Imbalances: Most countries lack the resources to compete for AI leadership. Their access to cutting-edge AI will be determined by their relationships with a few powerful corporations and states. This could exacerbate current geopolitical power imbalances, with powerful governments vying for control over AI, leaving many countries behind.

Shift in Geopolitical Power: AI threatens the status of nation-states as the primary geopolitical actors. Technology companies are emerging as new geopolitical actors, with their sovereignty over AI further establishing a “techno-polar” order. In this new order, technology companies possess power in their domains like that of nation-states.

Technoprudentialism: A proposed approach to AI governance, "technoprudentialism," aims to identify and mitigate risks to global stability without stifling AI innovation. This approach suggests that AI governance should be precautionary, agile, inclusive, and well-informed.

AI Governance Regimes: The article suggests the establishment of at least three AI governance regimes:

A global scientific body for fact-finding and advising on AI.

A mechanism to manage tensions between major AI powers and prevent the proliferation of dangerous AI systems.

A body to address the disruptive forces of AI.

Source: ChatGPT summary of The AI Power Paradox by Ian Bremmer and Mustafa Suleyman

======

Our Books on AI

https://www.amazon.com/dp/B0CKW8L7LH

https://www.amazon.com/AI-Translating-Artificial-Intelligence-Business/dp/1521717206/

Also read Social Concerns About Artificial Intelligence, published in 2018.

https://medium.com/@sharad.gandhi/social-concerns-about-artificial-intelligence-93e939b88a8c


Sharad Gandhi

Technology philosopher and strategist for creating business value with digital technologies