A handful of thoughts on AI, artificial general intelligence (AGI), and ethics
Thinking through Isaac Asimov’s Laws of Robotics and three models of operation
As artificial intelligence (AI), and general AI in particular, rapidly advances, I find myself grappling with its ethical implications and the ways it might manifest in the world. Forget the promises and mirages of what AI might do in the future. One place it could create immediate value is automating routine tasks, freeing up our time and energy for more meaningful pursuits and more creative activities (see Joanna Maciejewska’s quote below).
+1 to her sentiment. But there are other ways that I have seen AI being leveraged today, including:
- Advanced driver assistance systems (ADAS) that use computer vision and sensor fusion to enable features like autonomous emergency braking, lane-keeping assistance, and adaptive cruise control. I’ve seen these systems in action, and as the technology matures, I believe ADAS will become even more effective at improving road safety; a toy sketch of the kind of braking decision involved follows this list.
- AI-assisted diagnosis that analyzes medical imaging like CT scans, X-rays, MRIs, etc., to detect anomalies and bolster the work of radiologists and medical staff. This has the potential to save countless lives by detecting diseases and abnormalities at an earlier stage.
- Machine-vision systems that inspect products and components for defects on assembly lines, helping products meet consistent quality standards while keeping output high.
- Conversational AI chatbots and virtual assistants that handle routine queries and tasks across voice and text channels. These systems seem to be everywhere now, from customer-support chatbots to assistants like Siri and Alexa.
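To make the first item a bit more concrete, here is a toy sketch of the kind of decision an emergency-braking feature has to make: fuse a radar range reading with a camera detection and brake when the time to collision gets too short. All of the names and thresholds below are invented for illustration; a real ADAS stack is vastly more involved.

```python
# Toy illustration only: fuse a radar range reading with a camera detection
# to decide whether to trigger emergency braking. Field names and thresholds
# are made up for this sketch.

from dataclasses import dataclass


@dataclass
class SensorFrame:
    radar_distance_m: float      # range to the nearest object ahead, from radar
    closing_speed_mps: float     # how quickly we are approaching that object
    camera_sees_obstacle: bool   # did the vision model also detect an object ahead?


def should_emergency_brake(frame: SensorFrame, reaction_margin_s: float = 1.5) -> bool:
    """Brake only if both sensors agree there is an obstacle and the
    estimated time to collision falls below the safety margin."""
    if not frame.camera_sees_obstacle:
        return False
    if frame.closing_speed_mps <= 0:  # not closing on the object at all
        return False
    time_to_collision = frame.radar_distance_m / frame.closing_speed_mps
    return time_to_collision < reaction_margin_s


if __name__ == "__main__":
    frame = SensorFrame(radar_distance_m=12.0, closing_speed_mps=10.0,
                        camera_sees_obstacle=True)
    print(should_emergency_brake(frame))  # True: roughly 1.2 s to impact
```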
However, AI carries plenty of downsides as well. It could displace many jobs, exacerbating economic inequality. AI systems can also inherit biases from their training data or from the model’s own design, leading to discriminatory outcomes. Other challenges include privacy erosion, lack of transparency, security risks, and deepfakes.
Technology in and of itself is not inherently good or bad. The moral value of any given technology, whether AI, gene-editing tools like CRISPR, 3D printing, new computing hardware, or anything else on the list, is not predetermined. It arises from the principles and intent guiding its development and application by humans, which can be skewed by the pursuit of profit and prestige over ethical considerations and proper alignment.
Setting aside concerns that AI models have been and are trained on subjective, biased, and (seemingly more and more) not legally procured materials, Isaac Asimov’s Three Laws of Robotics could be a starting paradigm for thinking about how best to regulate robots, computers, tools, and now artificial intelligence. First published in 1942 in the short story “Runaround,” these rules cannot anticipate every scenario; inevitably, paradoxes and unforeseen robot behaviors emerge, proving any fixed set of laws insufficient on its own for governing robots (read: AI).
Asimov’s original laws are as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
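As a thought experiment, the strict ordering of the laws maps naturally onto a priority-ordered check, evaluated top down. The sketch below uses made-up fields (injures_human, refusing_allows_harm, and so on) and glosses over the genuinely hard part, which is deciding what counts as harm in the first place.

```python
# Naive sketch only: Asimov's Three Laws as a priority-ordered decision.
# Every field name here is a placeholder; real systems cannot reduce
# "harm" to a boolean.

from dataclasses import dataclass


@dataclass
class ProposedAction:
    injures_human: bool          # carrying out the action would hurt someone
    refusing_allows_harm: bool   # *not* acting would let someone come to harm
    ordered_by_human: bool       # a person asked for this action
    destroys_robot: bool         # the action would destroy the robot itself


def should_perform(action: ProposedAction) -> tuple[bool, str]:
    """Decide whether to carry out an action, checking the laws in priority order."""
    # First Law: a robot may not injure a human being...
    if action.injures_human:
        return False, "refused under the First Law"
    # ...or, through inaction, allow a human being to come to harm.
    if action.refusing_allows_harm:
        return True, "required by the First Law (inaction clause)"
    # Second Law: obey human orders (First Law conflicts already handled above).
    if action.ordered_by_human:
        return True, "required by the Second Law"
    # Third Law: otherwise, protect the robot's own existence.
    if action.destroys_robot:
        return False, "declined under the Third Law"
    return True, "no law objects"


if __name__ == "__main__":
    # Second Law overriding Third: an order is obeyed even at the robot's expense.
    order = ProposedAction(injures_human=False, refusing_allows_harm=False,
                           ordered_by_human=True, destroys_robot=True)
    print(should_perform(order))  # (True, 'required by the Second Law')
```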
While there have been several changes and additions over the years, it’s worth noting that in 1985 (in Robots and Empire) Asimov added a ‘Zeroth Law’ above all others: “a robot may not harm humanity, or, by inaction, allow humanity to come to harm.” A few years earlier, in 1981, the author had written an article for Compute! magazine about how a person might apply the three laws more broadly to tools:
- A tool must not be unsafe to use.
- A tool must perform its function efficiently unless this would harm the user.
- A tool must remain intact during its use unless its destruction is required for its use or for safety.
And not just for tools or technology; he indicated that the three laws can be applied to other paradigms, including a country’s constitution and one’s behavior. Towards the end of the article, he wrote:
You can apply this sort of reasoning, not only to material tools, but, also, without too much difficulty, to a social institution such as the Constitution of the United States.
The delegates to the Constitutional Convention of 1787 endeavored to work out a document that (first) would be safe to use, and would not subject Americans to a tyranny; and that (second) would be flexible enough to be responsive to the needs of the people, provided that did not compromise its safety; and that (third) would be sufficiently durable to serve new times and new conditions, by means of amendments if necessary, provided that did not compromise either its safety or its effectiveness.
You can even apply this sort of reasoning to your own behavior: to your attitude toward your diet, or toward exercise, or toward your job. That behavior must insure first safety — then effectiveness — then durability.
While Asimov’s safety, effectiveness, and durability principles provide an ethical framework for AI, imbuing AI with proper values and alignment takes more than implementing guardrails, whether rigid or flexible. We must also look at it from another perspective: how humans regard and “treat” AI systems.
Historically, humanity doesn’t have a great track record of treating entities we perceive as inferior or subordinate to ourselves. We have repeatedly subjugated, exploited, and mistreated animals, minorities, indigenous peoples, and any communities viewed as “other.” As AI systems become increasingly sophisticated, gaining capabilities that approach human-level intelligence, we risk repeating the same oppressive patterns we’ve imposed on other beings. As AI evolves, we must be wary of dehumanizing machines and other automated processes. We must imbue these systems with technical competence and a genuine, robust sense of right and wrong. And even knowing that machines aren’t human, we should extend to them the same ethical treatment we would to another person: be inclusive and, as much as possible, fair. Failing to do so risks enabling a new form of unethical subjugation of an emergent class.
The ethics of AI goes beyond code and processing power; it requires expanding our moral philosophies to account for this new mode of working. Upholding safety, effectiveness, and durability might be a good place to start, but we must also recognize that not all AI operates in the same mode. Currently, most AI applications function as “looms”: replicating human labor at scale, which can yield efficiencies and profits but risks widespread job displacement as a side effect.
Roy Bahat, head of Bloomberg Beta, categorizes AI models as looms, slide rules, or cranes:
- Looms, originally hand-operated by weavers but later automated to weave from programmed designs, represent AI replacing manual, repetitive tasks at scale. (There’s also a version operated by humans, but we’re talking about the hands-off version here.)
- Slide rules, once used to perform complex calculations by sliding strips over numbers, symbolize AI equipping humans to level up their performance. This is currently where generative AI sits.
- Cranes, which lift heavy objects on construction sites, portray AI enabling feats beyond our regular capabilities. Deep learning often plays this role.
In the current artificial intelligence landscape, most applications fall under the category of “looms”: tools designed to automate tasks at scale with little or no human intervention. When tasks that humans previously performed are automated, workers in that field can lose their jobs, creating a ripple effect that can lead to widespread unemployment and economic instability. Self-checkout systems, automated customer-service chatbots and voice assistants, resume-screening software, and spam and fraud detection systems are a few examples.
GenAI, the “slide rules” of the bunch, is a much smaller but growing part of the ecosystem, providing a rapidly maturing set of tools that help users emulate a particular kind of making (read: a version of creativity). While there is ongoing debate about whether AI can be creative the way humans are, these slide rules can provide a kind of intelligent augmentation, suggesting possibilities while the human retains agency over execution. Examples include AI writing assistants, text-to-image models, AI music generation tools, coding assistants, virtual AI actors and characters, and voice synthesis.
Smaller still is the slice of existing AI that qualifies as “cranes,” which allow humans to achieve things that were previously difficult or impossible. This improvisational “yes, and…” approach goes beyond simply automating or augmenting human skills: cranes might let us combine capabilities like language, logic, vision, and control in complex and novel ways. A few examples: AI systems for advanced scientific research and discovery, medical AI that uncovers new diagnostic methods, true self-driving cars and autonomous aerial vehicles operating in unstructured environments, creative AI that composes genuinely novel stories, music, and art, and AI for modeling complex systems like climate, economics, and geopolitics at scale.
To achieve greater impact, more effort must be directed towards building cranes that leverage collaborative efforts rather than short-sighted looms. Let me say that again: to achieve greater impact, more effort must be directed towards building cranes that leverage collaborative efforts rather than short-sighted looms.
As such, AI founders and companies have a responsibility to assess whether their creations expand possibilities or merely displace labor. Funding and development plans will follow like moths to an overhead light. Building cranes requires more challenging research and engineering than programming looms, but the results will be far more rewarding for society if we get the alignment right.
Policymakers must incentivize crane development through policy shifts and reforms. Public and private investment should focus on projects that push AI into uncharted territories. Through unions and advocacy groups, laborers can lobby for technologies that augment their skills rather than replace them outright. Workers displaced by looms should have access to training programs that equip them to utilize new cranes.
A nuanced approach is required to appropriately balance looms, slide rules, and cranes. While mindless automation via looms can increase efficiency and potentially generate profits, it will also decrease opportunity. Generative slide rules can aid human endeavors without fully replicating them. However, cranes will open new frontiers, allowing humans to achieve the unachievable.
AI should enhance, not supplant, humans in our home and work lives. Through the ethical development and deployment of all three types, we can and should maximize benefits and minimize harms.
There’s a ton more out there to read on AI. Here are a few pieces I found most interesting recently and have shared with others:
- Roy Bahat’s original talk on looms, slide rules, and cranes along with his writeup on the general concept
- An AI history, of a sort, from the NYTimes: How the AI fuse was lit
- Ezra Klein’s talk with Brian Christian on “The Alignment Problem”
If you found this useful, give it a clap (or a few). Thanks for reading. Let me know what you think: drop a comment or get in touch some other way. I’m always open to hearing others’ points of view, whether they differ from mine or not.