To Regulate AI, or not to Regulate AI… that is the question.

Avi Eyal
Entrée Capital
3 min read · Apr 17, 2023
Ernst Stavro Blofeld (№1), arch-villain and leader of SPECTRE (James Bond).

Artificial intelligence (AI) and advanced large language models (LLMs) like ChatGPT and AgentGPT are becoming more prevalent in our everyday lives. These models have the potential to revolutionize the way we interact with technology, but they also raise important questions about how they should be regulated to ensure ethical and safe use.

There are serious concerns, however, about how AI/LLMs can be used by rogue state actors, criminals, and terrorists to cause undesired consequences. Consider the effects of bias, discrimination, misinformation, and disinformation, and, at worst, autonomous decision-making that corrupts self-driving behaviour, banking decisions, or critical infrastructure. Ultimately, no matter what responsible players do, there will be state actors and cyber-terrorists/criminals who use AI/LLMs for phishing, scamming, offensive cyber operations, and other malicious ends.

Several organizations and initiatives attempt to deal with the weaponization and negative outcomes of AI. These range from broad efforts such as the Partnership on AI and the Montreal Declaration for Responsible AI to specific cases such as the Global Initiative to Combat Nuclear Terrorism. But good intent is no substitute for taking action today.

Adapting Asimov’s Three Laws of Robotics for AI

Science fiction author Isaac Asimov created the Three Laws of Robotics in his stories, designed to ensure that robots behave ethically and do not harm humans. While these laws were written for robots, they can also be adapted for AI systems. Here's how:

  1. First Law: “An AI system may not injure a human being or, through inaction, allow a human being to come to harm.” This law is important for ensuring that AI systems are designed to prioritize human safety above all else.
  2. Second Law: “An AI system must obey the orders given it by human beings except where such orders would conflict with the First Law.” This law is important for ensuring that AI systems are designed to be under human control and are not making decisions that could harm humans.
  3. Third Law: “An AI system must protect its own existence as long as such protection does not conflict with the First or Second Law.” This law is important for ensuring that AI systems are designed to be resilient and not easily taken offline or disabled.
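
Taken together, the adapted laws read like a precedence-ordered filter: check the First Law, then the Second, then the Third. As a minimal sketch, assuming a hypothetical action-vetting layer (the `ProposedAction` fields and `three_laws_filter` below are illustrative names, not an existing API), they might be encoded like this:

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"
    REFUSE = "refuse"


@dataclass
class ProposedAction:
    """A hypothetical action an AI system is about to take."""
    description: str
    harms_human: bool       # would executing this injure a human?
    ordered_by_human: bool  # was this requested by a human operator?
    self_destructive: bool  # would this take the system offline?


def three_laws_filter(action: ProposedAction) -> Verdict:
    # First Law takes absolute precedence: refuse anything that harms a human.
    if action.harms_human:
        return Verdict.REFUSE
    # Second Law: obey human orders. Safe to allow here, because any
    # conflict with the First Law was already caught above.
    if action.ordered_by_human:
        return Verdict.ALLOW
    # Third Law: protect the system's own existence, so refuse
    # self-destructive actions that no human has ordered.
    if action.self_destructive:
        return Verdict.REFUSE
    return Verdict.ALLOW


# The First Law overrides even a direct human order:
print(three_laws_filter(ProposedAction(
    description="draft a phishing email",
    harms_human=True, ordered_by_human=True, self_destructive=False)))
# -> Verdict.REFUSE
```

The hard problem, of course, is not the precedence logic but reliably deciding whether an action harms a human; the sketch only shows where that judgment would sit in the control flow.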

Self-Regulating AI

Self-regulation is an important aspect of AI/LLM development, particularly as AI systems become more complex and capable of making decisions autonomously. Self-regulation involves setting up guidelines and best practices for AI development and deployment, which can help ensure that AI systems are safe, ethical, and transparent. Some of the key aspects of self-regulating AI include:

  1. Transparency: AI systems should be transparent about their decision-making processes, data sources, and potential biases. This can help ensure that AI systems are making decisions fairly and without hidden agendas.
  2. Accountability: AI systems should be held accountable for their actions, particularly if they cause harm or are used for malicious purposes. Developers and users of AI systems should be responsible for the actions of these systems.
  3. Privacy: AI systems should respect the privacy of individuals and protect personal data. This is particularly important as AI systems become more involved in personal decision-making processes, such as in healthcare and finance.
  4. Safety: AI systems should be designed with safety in mind, particularly if they are used in critical systems such as self-driving cars or healthcare. This includes ensuring that AI systems can be easily shut down if they malfunction or pose a danger, and this is where Asimov's simple rule set can be used as a global filter for any AI/LLM action, as sketched below.
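
To make points 1–4 concrete, here is a minimal sketch (reusing the hypothetical `Verdict`, `ProposedAction`, and `three_laws_filter` definitions from the earlier sketch) of a wrapper in which every verdict is written to an audit log for transparency and accountability, and an explicit kill switch provides the safety shutdown. The `GuardedSystem` class and its `execute`/`shutdown` hooks are illustrative assumptions, not an existing framework.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-guardrail")


class GuardedSystem:
    """Hypothetical wrapper that vets, audits, and can halt an AI system."""

    def __init__(self, filter_fn):
        self.filter_fn = filter_fn  # e.g. three_laws_filter from above
        self.halted = False

    def execute(self, action: ProposedAction):
        if self.halted:
            raise RuntimeError("system has been shut down")
        verdict = self.filter_fn(action)
        # Transparency and accountability: every decision leaves an audit trail.
        log.info("%s | action=%r | verdict=%s",
                 datetime.now(timezone.utc).isoformat(),
                 action.description, verdict.name)
        if verdict is Verdict.REFUSE:
            return None
        ...  # perform the vetted action here (application-specific)
        return verdict

    def shutdown(self, reason: str):
        # Safety: a kill switch that no action of the system itself can veto.
        log.warning("emergency shutdown: %s", reason)
        self.halted = True
```

Privacy is harder to reduce to a snippet, since it lives in what data the system is trained on and allowed to record; the audit trail above, for instance, should itself avoid logging personal data.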

Regulators should come into play in the following areas: (i) offensive actions; (ii) defensive actions; (iii) legislation; (iv) deterrence; and (v) co-operation and coordination.

In the end, one cannot account for every permutation and variation that will occur in this field, but as a good actor it makes sense to start any AI/LLM development with this basic set of rules.
