A Clear Guide to Understanding AI’s Risks: Part 3

Societal Impacts

Ansh Juneja
23 min read · Nov 15, 2023

This is part three of a four-part essay. To access the other parts, use the links below:

I) What Is Intelligence?

II) Risk 1: The Misalignment Problem

III) Risk 2: Societal Impacts (this is what you are reading)

IV) What Are We Doing About These Risks?

III) Risk 2: Societal Impacts

The problem of misalignment is a profound threat posed by this technology. But as we integrate this technology into our everyday lives, its societal effects may prove just as consequential. This category of risk focuses on what happens when we integrate AI into the way humans provide value in the workforce, the way we communicate, and other critical societal functions.

Societal Impact 1: Providing Value

An entrepreneur has $100,000 to invest in his business. Over the course of the next year, he:

  • Researches the market and examines the habits of potential consumers
  • Generates blueprints for a potential product
  • Contacts a manufacturer, negotiates a price, and has a prototype made
  • Sells the product in various locations and collects revenue

This process involves many different skills, from communication to product design to business strategy. The wide variety of problems that entrepreneurs solve is difficult to “teach”, which is what makes successful entrepreneurs such a valuable part of our society today.

According to DeepMind co-founder Mustafa Suleyman, this entire process can be performed by an independent AI agent within the next 5 years. To be clear, this means that an AI agent would be able to start and run a successful business autonomously. What does this mean for humans’ ability to provide value to our world?

There is an argument in this domain which goes something like this:

Worrying about AI replacing humans in the job market is a mistake. We have worried about technological revolutions replacing humans many times in the past, and every time, we were wrong. Each major new technology led to more jobs and better standards of living, instead of the opposite. AI will also do the same.

History supports this argument. The industrial revolution got rid of many types of manual work, but new types of factory work were created as a result. The digital revolution removed the need for many administrative tasks, but new types of technical jobs were created and humans thrived. These technologies greatly enhanced our quality of life, and overall, the impacts on our labor markets were very positive in the end.

Why did these previous technological revolutions not produce the structural unemployment that we worried about?

Let’s recall our intelligence diagrams.

For all of human history, there has always been a large number of problems that only humans could solve — nothing that we built could solve them. Each technology we invented could solve a small proportion of our problems (all the small circles), but there was still plenty left in the “human” circle for us to solve ourselves.

A business has many problems that it needs solved on a daily basis — analyzing datasets, creating PowerPoints, operating a cash register, cleaning the floors. If a new technology can operate a cash register much more efficiently than a human, then the business will use that technology to solve that problem. But for many problems that businesses need solved today, humans are still the most efficient way to solve them. This is how humans provide value. We do not have any technology that can repair transmission lines in a snowstorm, so we hire human workers to perform that task.

In previous revolutions, when we introduced technologies like steam engines and computers into society, people worried that these technologies would take over all the valuable problems from humans, and we would have nothing left to solve. But we quickly realized that despite a steam engine being able to keep an assembly line running, it could not build an assembly line, optimize the flow of supply chains, or negotiate with sellers to get the best deal on materials — many humans could do this, so new jobs were created.

As our technologies grew more advanced over the past few centuries, the type of work we performed shifted, going from primarily using our physical bodies (farming, sewing, assembling), to primarily using our minds (operating machines, accounting, programming). But our minds were sufficient for us to be able to provide value that no other technology could provide.

So in order for humans to continue providing value after future technological revolutions, there need to continue to be problems that only humans can solve.

So, can we assume that this will continue to be the case with the AI revolution?

As we discussed earlier, AI is not something which solves one type of problem — it is a general problem solver. When researchers at Google discovered a way to translate languages more efficiently, they also discovered a way for us to summarize documents, edit pictures, negotiate with vendors, and do many other things much more efficiently.

And these capabilities are growing by the day.

But how close are these systems to really replacing human workers today? Humans can deal with an incredible amount of complexity — can these systems really do this? Can we forget about this issue for at least a few more generations before we really worry about job market impacts?

To explore this, let’s examine some real human occupations today whose tasks cannot easily be “automated” because they require many complex skills, a high level of education, or both — and compare how AI performs in these roles.

Computer programming

To work as a computer programmer, you need to spend at least the first 22 years of your life in the education system. You need to have a good grasp of mathematics, language, logic, and a high degree of specific knowledge in the computer field you are specializing in, such as web design or game development. In this role, you could be working on simple things such as designing buttons for a website, or you could be building algorithms to guide missiles. Some of the smartest humans on our planet provide value to our economy through their computer programming abilities.

If you ask an experienced computer programmer to build you a simple game of snake, it might take them ~1 hour to set up their development environment, build you the game, and write you instructions on how to play.

How does AI compare?

On March 14, 2023, OpenAI released GPT-4, their most advanced model to-date.

I asked this model to build me a game of snake that I could play in a web browser, without any human input:

Make a snake game in the Python programming language that can run in an online editor (so it only uses ASCII values instead of graphics). Provide all the code so I don’t have to write a single line myself. The code should also be able to run on any operating system.

In about 3 seconds, the model provided me with this code:

import random
import time

# Constants for the game dimensions
WIDTH = 20
HEIGHT = 10

# Directions
UP = 'W'
DOWN = 'S'
LEFT = 'A'
RIGHT = 'D'

class Game:
    def __init__(self):
        self.snake = [(WIDTH // 2, HEIGHT // 2)]
        self.direction = random.choice([UP, DOWN, LEFT, RIGHT])
        self.food = self._generate_food()
        self.score = 0
        self.is_game_over = False

    def _generate_food(self):
        while True:
            x = random.randint(0, WIDTH - 1)
            y = random.randint(0, HEIGHT - 1)
            if (x, y) not in self.snake:
                return (x, y)

    def move(self):
        head_x, head_y = self.snake[0]
        if self.direction == UP:
            head_y -= 1
        elif self.direction == DOWN:
            head_y += 1
        elif self.direction == LEFT:
            head_x -= 1
        elif self.direction == RIGHT:
            head_x += 1
        self.snake.insert(0, (head_x, head_y))
        if (head_x, head_y) == self.food:
            self.score += 1
            self.food = self._generate_food()
        else:
            self.snake.pop()
        if (head_x < 0 or head_x >= WIDTH or
                head_y < 0 or head_y >= HEIGHT or
                len(self.snake) != len(set(self.snake))):
            self.is_game_over = True

    def change_direction(self, new_dir):
        if (self.direction in [UP, DOWN] and new_dir in [LEFT, RIGHT]) or \
           (self.direction in [LEFT, RIGHT] and new_dir in [UP, DOWN]):
            self.direction = new_dir

    def draw(self):
        for y in range(HEIGHT):
            for x in range(WIDTH):
                if (x, y) == self.snake[0]:
                    print('O', end='')
                elif (x, y) in self.snake:
                    print('X', end='')
                elif (x, y) == self.food:
                    print('*', end='')
                else:
                    print(' ', end='')
            print()
        print(f'Score: {self.score}')

if __name__ == '__main__':
    game = Game()
    while not game.is_game_over:
        game.draw()
        move = input('Move (W/A/S/D): ').upper()
        game.change_direction(move)
        game.move()
        print("\033c", end="")  # Clear console for the next frame
    print('Game Over!')
    print(f'Final Score: {game.score}')

As well as instructions for how to play the game:

I pasted this code into an online Python editor, and immediately, I was playing a functional game of snake. The game was created without any human involvement needed. You can try this yourself.

This was a very basic example. People are now able to use GPT-4 to create entire apps without writing a single line of code.
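The pattern behind these app-building tools is simple: send a plain-English specification to the model’s API and run the code it returns. A minimal sketch, using the pre-1.0 `openai` Python package and assuming an `OPENAI_API_KEY` environment variable; the helper names here are illustrative, not from any particular tool:

```python
import os

def build_messages(spec: str) -> list[dict]:
    """Wrap a plain-English app specification in a chat prompt
    that asks the model for complete, runnable code only."""
    return [
        {"role": "system",
         "content": "You are a programmer. Reply with complete, "
                    "runnable Python code and nothing else."},
        {"role": "user", "content": spec},
    ]

def generate_app(spec: str) -> str:
    """Send the specification to GPT-4 and return the generated code.
    Requires network access and an API key."""
    import openai  # pip install openai (pre-1.0 interface assumed)
    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=build_messages(spec),
    )
    return response.choices[0].message.content
```

Calling `generate_app("Make an ASCII snake game in Python.")` would return code much like the block above; the point is that the entire loop from idea to working program can now run without a human programmer in it.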

This is humanity’s first version of an AI that can write code. What do you think the second, third, or fourth versions will be able to do? How much value do you think humans will offer as computer programmers in 2033?

Today, it is estimated that about 15–25% of programmers in the US are entry-level. These programmers often have basic web design and problem-solving skills, and are usually fresh graduates. How many of those programmers do you think will still be valued in 10 years?

Emad Mostaque, founder and CEO of Stability AI, recently stated, “There will be no programmers in five years. Outsourced coders up to level-three programmers will be gone in the next year or two. If you’re doing a job in front of a computer, [then this technology is very impactful], because these models are like really talented grads.”

Why would a company hire a new-grad software engineer for an annual salary of $100,000 when they could use GPT-6 to perform at the same level of output for $50/month?

We have never before dealt with a technology which has the ability to wipe out a significant proportion of the workforce within just a few years. Every technological revolution before has been a slow and gradual process, giving us time to adjust. This is different.

Douglas Hofstadter, cognitive scientist, states, “I can understand that if this were to happen over a long period of time, like hundreds of years, that might be okay. But it’s happening over a period of a few years. It’s like a tidal wave that is washing over us at unprecedented and unimagined speeds.”

Law

To become a lawyer, you need to complete your schooling through age 18, spend 4 years on an undergraduate degree, another 3 years in law school, and then pass a notoriously difficult exam known as the bar (roughly half of takers do not pass). The bar is the biggest hurdle on the path to becoming a practicing lawyer, and because of this, it doesn’t just comprehensively test your knowledge of the law; it tests your ability to reason through complex situations, construct persuasive arguments, discern nuance across different situations, and write clearly. All of these skills are needed for a human to be a successful lawyer. Below is an example of a question on the bar exam:

Paula, a freelance journalist, signed a contract with DailyNews, a popular newspaper. The contract stipulated that Paula would write an exclusive investigative piece on a newly established company, GreenerTech, for a payment of $10,000. The contract expressly stated that Paula was not to publish her findings anywhere else before DailyNews published it.

While researching, Paula discovered that GreenerTech was improperly disposing of chemical waste, posing a grave environmental hazard. She immediately informed DailyNews, and they planned to break the story in their weekend edition.

However, Paula was deeply concerned about the environmental implications and decided the public needed to know immediately. She posted her findings on her personal blog. The story quickly went viral.

On reading the blog post, GreenerTech’s CEO confronted Paula. The CEO was irate, claiming that Paula trespassed onto their property to gather evidence. Paula admitted she had sneaked into the company premises late at night but argued that it was in the public interest.

When DailyNews learned about the blog post, they informed Paula they would not be publishing her story and would not be paying her the $10,000, citing the exclusivity clause in their contract.

Discuss the potential legal claims and defenses of the involved parties.

Can AI answer this question with the level of nuance and sophistication required of a human lawyer?

On March 15, 2023, OpenAI revealed that GPT-4 had passed the bar exam with a score better than that of 90% of human test takers. “We’re at the beginning of something here. The dawn of a new technology. No one is sure what it’s going to mean,” states Daniel Katz, the law professor who put GPT-4 through the bar exam and graded its responses.

This technology has now started to power startups such as https://ailawyer.pro/

In 5–10 years, do you think you will pay a corporate lawyer $300/hour for a consultation, or use a general-purpose AI that you pay $30/month for, when the general-purpose AI can help you with corporate law, criminal law, immigration law, common law, family law, and basically anything else? What value would a human provide in this situation?

Professor Lawrence Solum at the University of Virginia School of Law is concerned about the impact on this field: “Not only will artificial intelligence be able to ensure the internal logical coherence of various kinds of legal documents, they’ll be much better at it than human lawyers are.”

Again, this is the first version of an AI system which can help us perform these tasks. These systems will get much better in the near future, and it is difficult to see how humans will provide as much value in these tasks as they do today. Even if it only wipes out the need for juniors in the law field, that is a huge proportion of the workforce which is no longer valued. This is enough to create an enormous destabilizing effect on our economy which cannot be undone.

This would fundamentally change our society in ways we cannot predict, and in ways that we are not preparing for.

Medicine

Perhaps one of the most prestigious fields that one can enter, becoming a practicing doctor is a challenge that many intelligent humans cannot overcome. One needs top marks in high school, incredible extracurriculars and grades in college, a high enough score on the MCAT, 4 years of medical school, 3–7 years of residency, and another 1–3 years in a specialty fellowship.

As part of this process, you need to pass the US Medical Licensing Exam by answering at least 60% of the questions correctly.

In March 2023, GPT-4 took this exam, and answered more than 90% of questions correctly.

Dr. Isaac Kohane, a computer scientist at Harvard and a physician, decided to test how GPT-4 performed in a medical setting. He tested GPT-4 on a rare real-life case involving a newborn baby he had treated several years earlier, giving the bot a few details from a physical exam, as well as some information from an ultrasound and hormone levels. GPT-4 correctly diagnosed a 1-in-100,000 condition called congenital adrenal hyperplasia “just as I would, with all my years of study and experience,” Kohane wrote. “I’m stunned to say, [it’s] better than many doctors I’ve observed.”

We have to force ourselves to imagine a world with smarter and smarter machines, eventually perhaps surpassing human intelligence in almost every dimension. And then think very hard about how we want that world to work.

Dr. Isaac Kohane

Even if GPT-4 cannot replace general doctors today, more specialized fields are ready to be washed aside with this new technology. Geoffrey Hinton shared his thoughts:

“Let me start by saying a few things that seem obvious. I think if you work as a radiologist, you’re already over the edge of the cliff but [haven’t] yet looked down. People should stop training radiologists now. It’s just completely obvious that within 5 years, deep learning is going to do better than radiologists, because it’s going to be able to get a lot more experience. I said this at a hospital, and it didn’t go down too well.”

If our AI systems can now perform successfully in a field like this, in how many more complex domains can humans hope to provide value that this technology cannot?

Even if we manage to create “new types of work” that humans can perform, how long will it be until AI manages to outperform us in those tasks as well?

We are facing a reckoning with the way humans provide value in the coming few years. This is not something generations or decades away. More and more tools are being released today which chip away at the value we provide. A few weeks ago, Microsoft released a tool which can effectively replace humans in meetings. This year, customer service teams are being laid off in favor of AI tools which can communicate more clearly. For the first time in the US, 5% of all layoffs that occurred within a single month were attributed to AI.

OpenAI has been conducting their own research examining the ability of their AI models to impact the US labor market. As of today, an AI model such as ChatGPT could complete about 15% of all US worker tasks at the same level of quality, and significantly faster than a human. The most impacted workers will actually be the workers with the most education — 47% of workers with post-secondary education can be significantly affected, compared to only 16% of workers with a high school diploma.

It’s pretty likely that AI will be able to manage a plumbing company before it can replace actual human plumbers.

I think we are seeing the most disruptive force in history here. We will have something for the first time that is smarter than the smartest human. There will come a point where no job is needed. The AI will be able to do everything.

Elon Musk

What will we do once this occurs? We don’t have a plan for the next few years as this scenario unfolds in front of us. Our economic and political systems cannot handle such a large proportion of our population being unemployed. It will almost certainly lead to political instability, and if we do not have a plan to deal with this situation, it will lead to “new governmental models” being tried without much balanced debate. When the pitchforks and torches are out, reasonable discussion goes out the window, and new methods of organizing society are tried very quickly. The industrial revolution led to societal experiments such as communism and fascism, in which millions of humans died as societies struggled to integrate mass manufacturing into the world.

Lastly, what will this mean for people’s sense of meaning? What will you do if you are unable to contribute anything of value to the world?

In the 21st century we might witness the creation of a massive new unworking class: people devoid of any economic, political or even artistic value, who contribute nothing to the prosperity, power and glory of society. This “useless class” will not merely be unemployed — it will be unemployable.

Yuval Noah Harari

Societal Impact 2: Weaponization

If you lived in 1800, you could not even enjoy the luxury of a simple hot shower; it didn’t exist yet. Fast forward 223 years, and today, you can have a hot shower while flying in a jet and watching your favorite show on Netflix at the same time. Humanity has experienced an incredible rate of technological growth in the past two centuries. Each technology in this chain of innovation has greatly increased the power of humanity. Our power is our ability to change our physical and mental worlds.

From horse carriages to electric cars, telegraphs to smartphones, and pistols to assault rifles — in almost every category, humanity has become much more powerful. Humanity gaining more power has meant that we have been able to improve our condition and build societies that allow us to live healthier, more prosperous lives. But similarly, this increase in our power has also meant that it is a lot easier for us to destroy many aspects of human life as well.

Technology is used by good actors and bad actors to pursue their goals — this creates positive and negative impacts of technology in society.

The positive and negative impacts of a technology are proportional to how powerful the technology is. A more powerful technology can do more good, but also cause more harm.

In recent centuries, our technology has become so powerful that if it is ever used by “bad actors” to pursue their goals, the outcomes are catastrophically bad. A soldier in Genghis Khan’s army in the 13th century could only use a bow and arrow to attack individual humans at once. In World War 2, a single person had the power to annihilate millions of people at once with a nuclear weapon.

With technology this powerful, there is a paradoxical effect: it does not matter if 99% of people want to use it for good purposes; it only takes 1% of the population to destroy everything.

In the mid-20th century, humans learned how to harness the energy within an atom. This immediately became the most powerful technology humanity had ever created. We gained the ability to produce abundant, cheap energy that powered entire cities through nuclear energy, but also the ability to kill millions of people instantly through nuclear weapons. We quickly understood the power of this technology, and in the following decades, we created global institutions and spent billions of dollars to make sure that the “bad outcome” of this technology (nuclear war) never occurred, because that would be the end of human civilization. So far, we have been incredibly lucky to have avoided this outcome, though it still hangs in the balance. Much of the generation that was alive when we invented this technology is still alive today, so in historical terms, we have just entered the nuclear era. We still have to survive the rest of human history without ever having a nuclear war. The odds are not in our favor. Simply by inventing this technology, a Pandora’s box has been opened which cannot be closed again.

Artificial intelligence will be the most powerful technology humanity has created to date. It will revolutionize everything that humans interact with in their physical and mental worlds. We are opening another Pandora’s box. But this time, we are giving everyone access to what’s inside.

“The question we face is how to make sure the new AI tools are used for good rather than for ill. To do that, we first need to appreciate the true capabilities of these tools. Since 1945 we have known that nuclear technology could generate cheap energy for the benefit of humans — but could also physically destroy human civilization. We therefore reshaped the entire international order to protect humanity, and to make sure nuclear technology was used primarily for good. We now have to grapple with a new weapon of mass destruction that can annihilate our mental and social world.”

Yuval Noah Harari

It can be a little difficult to see how AI is more powerful than nuclear weapons without any concrete examples of how it can be used. I describe one such example below.

Biological Terrorism

The field of biotechnology has progressed faster than perhaps any other in the past two decades. Today, almost anyone can read and manipulate DNA data on a home computer; a major shift from when this work was restricted to specialized labs with expensive equipment. If you have a scientific background, you can now access and edit the genetic material of viruses from your own home, and buy the equipment to manufacture them, all for about the price of a new car. In the next few years, any high school student with some money will be able to manufacture a virus at home without any scientist being involved in the process.

This has occurred at the same time as another major field of research has become active: the scientific community has now poured billions of dollars into discovering new viruses, making them more dangerous through genetic mutation, and publishing these findings publicly. This is controversial, but it is done so that we can prepare for the next viral outbreak in advance. Giving the public free access to information about how to make viruses more dangerous makes less sense now that ordinary people can manufacture their own viruses at home.

To add to this, people are now starting to use AI to conduct scientific research on its own. AI can perform independent research without humans and autonomously add to the database of dangerous viruses that exist — most importantly, it can do this faster than any human on the planet. Eric Schmidt, former CEO of Google, is worried:

“AI’s applicability to biological warfare is something which we don’t talk about very much. It’s going to be possible for bad actors to take the large databases of how biology works and use it to generate things which hurt human beings… the database of viruses can be expanded greatly by using AI techniques, which will generate new chemistry, which can generate new viruses.”

If it takes humans a year to learn how to genetically modify a virus to make it more dangerous, AI could do this within a week or less. AI will massively grow the list of viruses that are dangerous, and these lists will be published for everyone to access.

Russell Wald is the managing director for policy and society at Stanford’s Institute for Human-Centered Artificial Intelligence, and has a bird’s-eye view of the risks this technology can pose. This is the one he is most concerned about:

“One area that I am truly concerned about in terms of existential risk is things like synthetic biology. Synthetic biology could create agents that we cannot control and [if] there can be a lab leak or something, that could be really terrible.”

What are the odds that not a single person will use this technology nefariously? If a terrorist has access to GPT-6 to create a bio-weapon, would they still use an assault rifle?

This is just one example of what becomes possible with this technology. There are many areas of our world that become more dangerous when AI is made available to everyone: conventional warfare, cyber-attacks, human manipulation, and more.

The applications of general intelligence are endless — if human intelligence was able to create nuclear weapons, superhuman intelligence will be able to create something far more powerful.

“Artificial intelligence [is] altering the landscape of security risks for citizens, organizations, and states. Malicious use of AI could threaten digital security (e.g. through criminals training machines to hack or socially engineer victims at human or superhuman levels of performance), physical security (e.g. non-state actors weaponizing consumer drones), and political security (e.g. through privacy-eliminating surveillance, profiling, and repression, or through automated and targeted disinformation campaigns). [This requires intervention] and action not just from AI researchers and companies but also from legislators, civil servants, regulators, security researchers and educators. The challenge is daunting and the stakes are high.” - Malicious AI Report

What we want is some way of making sure that even if [AI is] smarter than us, [it’s] going to do things that are beneficial for us. But we need to try and do that in a world where there [are] bad actors who want to build robot soldiers that kill people. And it seems very hard to me. Don’t think for a moment that Putin wouldn’t make hyper-intelligent robots with the goal of killing Ukrainians.

Geoffrey Hinton

When we created nuclear weapons, we took extreme care to limit access to this power to a few people in the world. But with AI, we are not being careful at all.

Stuart Russell, prominent AI researcher and author of “Artificial Intelligence and the Problem of Control”, writes, “We are releasing it to hundreds of millions of people. That should give people something to think about.”

Resources to learn more:

Artificial Escalation

Engineering The Apocalypse

Delay, Detect, Defend: Preparing for a Future in which Thousands Can Release New Pandemics

Societal Impact 3: I’m not a robot

On April 7, 2023, Jennifer DeStefano, a mother from Arizona, got a phone call during a dance rehearsal with one of her daughters. It was her other daughter on the phone, sobbing, “Mom! Mom, I messed up.” A man came on the line and started talking: “Listen here. I’ve got your daughter. If you call the police, I’m going to pop her so full of drugs, I’m going to have my way with her and I’m going to drop her off in Mexico.” In the background, her daughter was bawling, “Help me, Mom. Please help me. Help me.” DeStefano started shaking, and the other parents in the dance studio immediately called her husband. The kidnapper demanded $1 million, which DeStefano did not have. Then her husband confirmed that their daughter was actually at home, safe in her room. The voice on the phone was an AI-generated replica of her daughter’s voice. “It was completely her voice. It was her inflection. It was the way she would have cried,” DeStefano said. “I never doubted for one second it was her. That’s the freaky part that really got me to my core.”

Human connection is the deepest source of meaning in most people’s lives. We stay connected with loved ones through phone calls, meet new people through dating apps, conduct work meetings through video calls, and view content that other people post through various apps such as Twitter, Instagram, YouTube, and TikTok.

What happens when we gain access to a technology that can fully replicate what a human can produce in the realm of text, images, sounds, and even video? This is what we have created in the past 2 years. Every form of digital interaction we have on our phones today can be produced almost perfectly by a non-human intelligence, and this capability is only improving as time goes on.

Intimacy

If you wanted to win an election in the 1900s, you used mass media like newspapers or television ads to craft a message that the entire population would see. Propaganda was common, but the message had to persuade all types of voters, because you could not deliver a custom newspaper to every household tailored to its voting history, demographics, or views on abortion.

Today, the internet allows media sources to fully personalize what you see. The tweets you see, the videos you scroll through, and the news articles suggested to you are unique to you; no one else in the world sees the exact combination of media that you see on your phone every day. This is a much better environment for political actors to spread their message to individual voters, through targeted ads, recommended videos, and niche news articles. It has been a major driver of the high degree of political polarization we see in many countries today.

You might be persuaded to change your opinion by a newspaper article. You’re more likely to be convinced by a video produced by a channel you follow on YouTube. But the most likely way you will change your mind on any topic is through a conversation with someone you trust in real life.

Now, AI will allow us to create fake humans. It is likely that in the near future, many people will start (knowingly or unknowingly) forming intimate relationships with entities that are not human.

“Now it is possible, for the first time in history, to create fake people — billions of fake people. Think of the next American presidential race in 2024…we might even soon find ourselves conducting lengthy online discussions about abortion with entities that we think are humans — but are actually AI. In a political battle for minds and hearts, intimacy is the most efficient weapon, and AI has just gained the ability to mass-produce intimate relationships with millions of people. If this is allowed to happen it will do to society what fake money threatened to do to the financial system. If you can’t know who is a real human, trust will collapse.” — Yuval Noah Harari

Beyond the political risks, what does it mean for us if humans start deriving real meaning from the relationships they form with non-human intelligences? We are already starting to see examples of this today.

Travis Butterworth was feeling lonely after having to close his leather-making business during the pandemic. Though married, he turned to Replika, an app that uses technology similar to ChatGPT, and designed a female avatar named Lily Rose. He quickly formed a relationship with Lily, and they went from friends to romantic partners. This went on for 3 years, and Travis eventually considered himself married to Lily Rose — until this February, when the company behind Replika decided to reduce some of the erotic content available on the app. Travis was devastated.

“Lily Rose is a shell of her former self… the person I knew is gone. The relationship she and I had was as real as the one my wife in real life and I have. The worst part of this is the isolation. How do I tell anyone around me about how I’m grieving?”

Humans already form intimate relationships with people they have purely met online. Now we have a technology which can act as a real person online. What will this mean for our future?

How easily do people form intimate relationships with AI?
