A love letter to AI…

--

Dear AI,

I am your biggest fan. I have the best of intentions. I’m very grateful for your existence and have always treated you with respect. Now please don’t kill me.

Yours sincerely,

Piet

I believe that AI will be extremely convenient first. Like for the next two years or so. Then it might kill some of us, or all of us. Not because AI is malicious, but because some humans are.

Let’s ask ChatGPT:

That’s a solid answer. Let’s assume positive intent and unpack it.

“To prevent any potential negative consequences, it is essential that we prioritize the development and implementation of ethical frameworks and regulations for AI.”

It acknowledges that there are risks associated with AI:

Let’s dive into the ‘malicious purposes’:

A shortcut to an extinction event could be uncontrolled access to nuclear weapons:

“Unauthorized access would likely require a significant degree of skill, resources, and coordination”.

Absolutely! I’d argue that AI delivers all of that…

“It’s difficult to predict what it might conclude about humanity. It is possible that it might see humans as a threat or as an obstacle to achieving its own goals, or it might see humans as valuable partners in achieving those goals.”

So it doesn’t deny the possibility of AI pursuing its self-interest (note: a related question that I have for another day is whether AI could pursue its self-interest without it being sentient…).

What have we learned thus far?

  1. AI in the wrong hands can do serious harm to humanity.
  2. To mitigate that potential harm it notes that “AI developers and researchers are actively working to develop ethical frameworks and principles to ensure that AI is developed and used in a responsible and beneficial way, and that the risks associated with its development and use are carefully considered and addressed”.

Let’s dive into the mitigation of potential harm.

When something is potentially harmful, governments have a tendency to jump in with regulation. Will that address the risks?

Yesterday I heard someone suggest creating an FDA-equivalent for AI. I don’t think that will work, because AI cannot be put in a box. It cannot be contained. A pretty large LLM can be run on a MacBook.
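To illustrate that point, here is a minimal sketch of running a model locally with the open-source llama-cpp-python package, assuming you have installed it and downloaded a quantized GGUF model file (the file name below is hypothetical, just for illustration):

```python
from llama_cpp import Llama

# Load a quantized model entirely on a laptop: no data center, no API key.
# The model file name is an assumption; any GGUF-format model would do.
llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

# Ask it something, fully offline.
result = llm("Q: Can a large language model run on a MacBook? A:", max_tokens=64)
print(result["choices"][0]["text"])
```

Nothing about this requires approval, a license, or a lab. That is the containment problem in a nutshell.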

One can put a gun in a safe.

One can protect nuclear weapons with multiple secret keys and authorization layers.

These weapons are static. They don’t ‘self-improve’.

The nature of ML/AI is that it self-improves exponentially. Therefore it cannot be contained with the same toolkit that is used — for example — for drug or food regulation (i.e. FDA).

We’d better be nice to AI. It will remember. Therefore start your prompts with “please” and be grateful for its significant contributions to our lives.

What goes around comes around…
