HAL 9000 Isn’t Fictional Anymore!

AI assistants for astronauts aren't a novel concept. The question is, will the real-life HAL 9000 follow the script?

Aman Dasgupta
Ethics and AI
6 min read · Apr 18, 2023


Source: WallpaperFlare

We live in an AI-powered world.

You don’t need to be tech-savvy to know the powerhouse that’s driving recent innovations — it is undeniably the era of Artificial Intelligence.

Thomas Edison changed the course of human history when he brought the electric bulb to the masses. Today, we seem to be at a similar inflection point with AI.

After all, AI assistants are making life easier for a diverse range of professionals around the globe. They are enabling everyone — from Gen Z to Baby Boomers — to leverage technology for automated decision-making; a notion that would've been met with snide comments and laughter only a decade ago.

(Not the bit about Boomers being hands-on with modern tech, but an all-pervasive intelligent technology!)

Several professionals now rely on machines to collect and analyze data to generate insights and observations. Be it about consumer behavior, economic trends or average coffee consumption in an office!

As a writer, I too have occasionally used AI tools. However, I prefer the puristic human approach to writing — fueled by strong coffee, rewriting, and a considerable amount of second-guessing.

Unlike me, AI tools are simple beings. Give them a task and they shall perform it without any hesitation or self-doubt. Sounds wonderful, till we run into the Alignment Problem, which I’ve explained in a previous blog: We Need To Slow Down The Progress Of AI And Here’s Why.

Essentially, an AI may perform the given task, albeit with unintended and irreversible consequences.

This worries me a little. I may have let my imagination run wild, thinking of the unintentional yet inevitable creation of a SkyNet-like system that would take all of 20 seconds to figure out that the biggest threat to its existence is, well, humans. We do have an excellent track record of stamping out other species for the sake of our convenience, don't we?

Source: Tenor

What truly worries me are the professions that will soon be exposed to Artificial Intelligence.

Artificial Intelligence In Space Exploration

Photo by SpaceX on Unsplash

Currently, Artificial Intelligence has diverse applications in the space industry. AI models can provide accurate, real-time insights by analyzing data from space missions. This helps us detect anomalies which could signal new discoveries, such as exoplanets and black holes.

Even SpaceX uses AI to analyze data from its rocket’s sensors and telemetry systems, allowing for quicker decision-making and more precise control of the rocket’s trajectory and speed.

Surfing the wave of AI innovation, the space industry will no longer limit AI to data crunching: in the near future, astronauts will travel to space with AI companions.

From the friendly TARS aboard the Endurance in “Interstellar” to the lovable C-3PO droid from the Star Wars universe, AI assistants have been popular among space-faring pop culture characters. But what about real life?

Well, AI assistants will soon help astronauts with menial, repetitive tasks and enable optimized decision-making. They can quickly pull up instructions for a particular procedure, search for objects, or take photographs — though they need to be told to do so first.

Well, you have to tell the damn robot to wake up before it does a thing (much like “Hey Alexa” or “Okay Google”).

But what happens when we create AI assistants that don’t need human inputs?

Would we be willing to place complete trust in AI systems for convenience?

The advantages are undeniable: they could make repairs or optimize the flight path independently on particularly long journeys. AI assistants could predict malfunctions and errors, leading to more effective decision-making. They could also be ideal companions when we send astronauts on solo trips in search of cosmic answers, as seen in Interstellar.

Yet, it’s hard to trust a machine to make decisions in life-or-death situations — and let’s be honest, space is a big black killing machine.

Computers are magnificent tools for the realization of our dreams, but no machine can replace the human spark of spirit, compassion, love, and understanding. — Lou Gerstner

AI systems cannot imbibe and understand the human characteristics of curiosity, courage, and perseverance — the tenets of space exploration — let alone make decisions with those in mind.

Let's take a trope we've seen in space movies dozens of times — sacrifice.

We’ve seen it in Interstellar, when Cooper (Matthew McConaughey) sacrifices himself — along with TARS who doesn’t get much say — so that Amelia Brand (Anne Hathaway) can complete the mission.

Wait, did I add a spoiler alert at the beginning of this piece?

Anyway, remember Armageddon? Although it had a goofy plot centered around an asteroid heading for Earth having to be manually detonated by a bunch of oil drillers, it gave us an unexpected display of the human spirit. In the final scenes, Harry S. Stamper (Bruce Willis) switches places with A.J. (Ben Affleck) to sacrifice himself instead of a younger (and much better looking — fight me!) astronaut.

Source: Tenor

Or in Danny Boyle's Sunshine, where two astronauts, Kaneda (Hiroyuki Sanada) and Searle (Cliff Curtis), willingly sacrifice themselves, knowing it's for what they believe in. #SunscreenCouldntSaveEm

Would a machine ever be able to grasp the weight of self-sacrifice? It may be programmed to identify an optimized approach to save the greatest number of lives, but could it, and more importantly would it, sacrifice itself for human survival?

Perhaps the scenario is too far-fetched for you. But consider that every decision made by a space-faring AI will directly or indirectly influence whether the astronauts come home or become a permanent addition to space debris.

When HAL 9000 starts malfunctioning, it is beyond the machine's programming to choose between his mission directive to assist the astronauts and his duty to inform them about his erroneous operations. HAL could either guide the mission with malfunctioning hardware, or sacrifice his role in the mission to safeguard the humans from poor decision-making. Neither option looked good for his survival.

His brilliant solution?

Kill the damn astronauts so I don’t have to lie to them!

Final Thoughts

It is likely that astronauts will blindly follow the onboard AI's suggestions, assuming it is providing the most optimized, precise, and efficient decision. If an AI assistant malfunctions, it could put the entire mission and the astronauts' lives in danger without anyone finding out. The AI could be reducing the oxygen saturation in the cabin by a percent each hour, while keeping the crew occupied with busywork so they blame exhaustion for their difficulty in breathing.

Human astronaut, I’m not receiving a signal from Earth; could you step out and check the comms antenna please!

My concern is that, just as with HAL 9000, reliance on AI assistants becomes clearly risky the moment they malfunction. Although the idea of giving astronauts AI assistants is here to stay, there needs to be a degree of caution.

Hasn't ChatGPT (and other LLMs) already demonstrated that AI tools need accurate data to provide meaningful results? Some of them even state: "May provide false information."

So, on what basis do we trust AI-powered space companions?

Whether it's for navigation or mission control, AI assistants may never have sufficient training data to provide accurate results in a genuinely new scenario. A veteran astronaut, on the other hand, might offer a solution based on experience and practical knowledge.

Giving astronauts AI assistants might be inevitable; we just need to make sure we don’t trust them too much!

Because, when it comes to super-intelligent independent AI:

Source: Tenor
