Photo by NASA on Unsplash

AI will Change the World, but Never Own It

hudbeard


Follow me and clap for this article if you want to see more content like this. Highlight anything that sticks out to you. Comment your opinion; I want to hear it.

We are constantly surrounded by movies and books that foretell the end of humanity at the hands of a “superior” computer. Until recently, this remained an entertaining fictional story. But with AI on the rise, is it a legitimate threat? Will ChatGPT rule the world? Will the point of singularity ever arrive?

To discuss this, we must define some terms. The first is the point of singularity: the point in time when artificial intelligence exceeds human intelligence. It is generally understood that at this point the AI is sentient and begins to improve itself.

The Singularity

It is at the singularity that no one can predict what will happen to technology and science. The AI would be able to discover and apply new forms of technology that humans do not understand. All technology would then fall under the AI’s control, because it could identify vulnerabilities that humans were unaware of. The whole world would become chaotic and unpredictable. Here is a great video that goes into more depth if you want more information.

Now that we know what the singularity is and what it could do, is it a possibility? As of right now, no, it is not. As for the future, that is a harder question. Currently, AI can only learn from information that humans already know and serve to it on a silver platter, and I would argue it will stay that way. Humanity may find new ways to use math to build more intelligent models; the technology we have now was once unimaginable. But AI will always be limited by its training data, which is the sum of human knowledge, and humans aren’t perfect.

The only self-aware and rational species on earth is the human race. Humans need something more than their physical brains to function: something non-physical is required to be rational, to be sentient, and to have emotion and instinct. This is the only known way for sentience to exist. Computers, by contrast, must possess everything physically. That is not to say technology cannot improve, but in the foreseeable future, sentience in computers is unattainable. Experts are still debating what Artificial General Intelligence (AGI) looks like, but they generally agree that sentience is an important step toward AGI, which, for now, is the next step in AI.

Okay, now let’s talk worst-case scenario. Say humans did build something more intelligent than themselves, and it backfired. Is that the end of humanity? No, it is not. Here’s why: humans have an innate instinct to survive at any and all costs. This is where humans and AI differ. Theoretically, AI will not have this instinct, because it has only just come into existence. Humans also have the advantage of numbers. If something were to go wrong, we’re just one phone call away from cutting the power. Ultimately, we would find a way to survive by destroying the AI we created. Humanity will survive.

Ending on a lighter note, this whole thing reminds me of this comic:

Circle of AI Life from MonkeyUser

Follow me and clap for this article if you want to see more content like this. Comment your opinion; I want to hear it.

Get an email when I publish a story!

https://medium.com/@hudbeard/subscribe


hudbeard

Professional software engineer since age 14. Programming since age 6. Python is my love language. 🐍