The Three Laws of Mindful Personalization

Margot Kimura
Yoyo Labs Blog
Jul 27, 2018

Origin: The Three Laws of Robotics

In 1942, Isaac Asimov introduced the “Three Laws of Robotics”, a set of absolute rules that every¹ Artificial General Intelligence (AGI)² robot-servant in his science fiction stories must obey. In short, the rules are (1) Don’t let humans be harmed, (2) Obey humans, and (3) Protect yourself.

The three laws are hierarchically ordered, so lower-numbered laws override higher-numbered laws whenever the laws are in conflict: for example, any action (or inaction) taken to obey the Second Law cannot interfere with obeying the First Law (but it can interfere with obeying the Third Law).
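
To make this precedence concrete, here is a minimal sketch of how a strict law hierarchy could look in code (purely illustrative, and certainly not Asimov’s specification): candidate actions are compared by which laws they violate, with lower-numbered laws weighted absolutely above higher-numbered ones.

```python
# Purely illustrative sketch (not Asimov's spec): each law is a predicate
# that returns True when an action violates it. Python compares tuples
# lexicographically and False < True, so violating the First Law is always
# worse than violating any combination of lower-priority laws.

def choose_action(candidate_actions, laws):
    """Pick the action whose highest-priority violation is least severe."""
    def violations(action):
        return tuple(law(action) for law in laws)
    return min(candidate_actions, key=violations)

# Toy laws over a dict describing a candidate action.
laws = [
    lambda a: a.get("harms_human", False),  # First Law
    lambda a: a.get("disobeys", False),     # Second Law
    lambda a: a.get("self_damage", False),  # Third Law
]

actions = [
    {"name": "obey, but damage self", "self_damage": True},
    {"name": "refuse the order", "disobeys": True},
]
print(choose_action(actions, laws)["name"])  # obey, but damage self
```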

After exploring scenarios where AGI robot-servants interact with each other and society at large, Asimov added a fourth, or “Zeroth”³ Law, which can be summarized as: (0) Don’t let humanity be harmed.

Think what you will about how applicable these laws are to AGIs⁴; the Three Laws of Robotics do offer sound fundamental requirements for safely designing systems that serve our needs, while also encouraging the vital discussions and definitions that are needed to adequately implement those requirements.


Adapting the Three Laws to Personalization

The Three Laws of Robotics were formulated as a set of instructions that were ‘hard-coded’ into every AGI robot-servant as rules for governing its behavior. Like a human, an AGI robot-servant could absorb the context of the situation it found itself in and then apply the laws to decide what to do next.

To date, the best personalized services are only Artificial Narrow Intelligence (ANI)² systems, nowhere near AGI-level sophistication. Despite how advanced they may seem, ANI systems are not capable of understanding the Three Laws of Robotics, because they inherently cannot understand the context of any situation outside the narrow task they were explicitly programmed to perform. In other words: ANI systems are roughly as self-aware as a microwave oven.

It is increasingly important for us to understand ANI systems’ limitations and failure modes, such as unintended bias, because we are trusting ANIs to make ever more consequential decisions, including who gets a job interview, who gets approved for a loan, who is a suspect in a crime, how long a convicted criminal is sentenced, and who is granted parole.

Because personalized services and ANI systems cannot think for themselves, the humans who build those systems must think for them.

Thus, the Laws of Mindful Personalization are intended for the humans who create personalized services, as guiding principles for how to create and maintain ethical and positive products.

With that in mind, let’s specialize the Laws of Robotics to personalization.

Law #1: Don’t let humans be harmed → Don’t harm the user.

From an ethical perspective, it’s simple and obvious to say, “Don’t harm the user”. From an implementation perspective, it is very difficult to define what “harm” is for every user, because every user has unique circumstances and needs.

Let’s start with an example of how a personalized service can harm a user: Take Apple iPhones, which include Siri, an ANI personal assistant, as well as a plethora of apps and media that Apple personalizes for its users. Through personalization and design, Apple tempts its customers to use their iPhones frequently. However, Apple found that it had been overly successful: users, especially children, were getting addicted. While we can all agree that “addiction” is bad, the point at which “normal use” ends and “addiction” begins is difficult to define, and highly subjective.

It is important to note that well-implemented personalized services will tend to be addictive, by design. While a person’s well-being is generally his own responsibility, an addict cannot be trusted to make the best decisions with regard to his addiction; therefore, it is the responsibility of the business that offers the personalized services to do what it can to make sure its users are not being harmed.

How? Businesses can protect their users by actively responding to user concerns as they arise, actively ensuring that the personalized services are supporting people’s well-being, and actively checking for bias in their ANIs’ models and data. Businesses can also prevent harm from third parties by guarding their users’ privacy. It isn’t possible to foresee all the ways in which a personalized service may harm a user, but it certainly is possible to minimize harm by acting as soon as any issue arises. Businesses that succeed at fulfilling Law #1 can look forward to having long-lived, happy, and functional customers who can continue to buy their products long into the future.
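
As one concrete instance of “actively checking for bias”, here is a minimal sketch of a demographic-parity check that compares a model’s approval rates across groups. The data and the 0.2 threshold are made up for illustration; real audits need domain- and regulation-specific criteria.

```python
import pandas as pd

# Hypothetical decisions from a loan-approval model, one row per applicant.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Demographic parity: compare approval rates across groups.
rates = df.groupby("group")["approved"].mean()
print(rates)  # A: 0.75, B: 0.25

# Flag the model for a human audit if the gap is large.
THRESHOLD = 0.2  # arbitrary illustrative value
if rates.max() - rates.min() > THRESHOLD:
    print("Approval rates differ sharply across groups; audit model and data.")
```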

Law #2: Obey humans → Do what the user wants.

Personalized services are intended to delight you by figuring out what you want and offering to sell it to you. The key here is that the personalized services need to figure out what you want; in other words, personalization must not be used to manipulate you into buying something you don’t actually want.

For example, a user may decide to entrust a personalized service to purchase recurring annual gifts for her loved ones. Applying Law #2 means that the personalized service must carry out the task given to it by the user (e.g., pick out the most awesome gifts), and not a different task set by the business offering that service (e.g., spend as much of the user’s money as possible).
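
In implementation terms, Law #2 constrains the objective the service optimizes. A minimal sketch of the difference, with made-up gift names and scores:

```python
# Hypothetical catalog: each gift has a predicted user-satisfaction score
# (the user's objective) and a profit margin (the business's objective).
gifts = [
    {"name": "book",   "satisfaction": 0.9, "margin": 2.0},
    {"name": "gadget", "satisfaction": 0.4, "margin": 9.0},
    {"name": "scarf",  "satisfaction": 0.7, "margin": 4.0},
]

# Law #2 compliant: rank by what the user is predicted to want.
by_user = sorted(gifts, key=lambda g: g["satisfaction"], reverse=True)

# Law #2 violation: rank by what extracts the most money from the user.
by_margin = sorted(gifts, key=lambda g: g["margin"], reverse=True)

print([g["name"] for g in by_user])    # ['book', 'scarf', 'gadget']
print([g["name"] for g in by_margin])  # ['gadget', 'scarf', 'book']
```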

Similarly, personalization must not be used as a vehicle for discrimination: no one wants to be treated poorly.

How? Businesses can be responsive to their users by developing personalized services that strive to provide value to all of their customers. The upside of this strategy is that products developed this way are more likely to become popular, to receive positive press, and to boost your business’s reputation.

Law #3: Protect yourself → Monetize effectively.

This one is easy: companies won’t offer personalization unless it positively impacts their bottom line. Nevertheless, it’s worthwhile to explicitly reserve that right, because history has shown that companies can be forced to do things that aren’t good for anyone⁵.

How? Businesses can monetize effectively by investing in a sound data strategy with good metrics, efficient and scalable data infrastructure, advanced machine learning and ANI models, and agile deployment practices. This upfront investment can pay for itself many times over, as it attracts new customers, strengthens existing relationships, and keeps your business relevant into the future.
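
As a taste of what “good metrics” can mean here, consider the classic back-of-the-envelope customer-lifetime-value formula (made-up numbers below): personalization that genuinely serves users shows up as lower churn, which directly raises lifetime value.

```python
def simple_ltv(monthly_revenue, gross_margin, monthly_churn):
    """Back-of-the-envelope customer lifetime value: margin earned per month
    times the expected customer lifetime in months (1 / churn rate)."""
    return monthly_revenue * gross_margin / monthly_churn

# Made-up numbers: $10/month, 60% gross margin, 5% monthly churn.
print(simple_ltv(10.0, 0.60, 0.05))   # 120.0 -> each retained user ~ $120
# Halving churn (happier users, Law #1 and #2 done well) doubles LTV:
print(simple_ltv(10.0, 0.60, 0.025))  # 240.0
```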

Like the Laws of Robotics, the Three Laws of Mindful Personalization also require a fundamental, zeroth law to prevent societal disaster:

Law #0: Don’t let humanity be harmed → Don’t let society be harmed.

In other words: don’t let the aggregate effect of many individuals using personalization add up to an avoidable, bad situation. Recent revelations that YouTube is unintentionally reinforcing radicalism, that Facebook creates ideological echo chambers to maximize ad revenue, and that Microsoft’s facial recognition software was inadvertently racist underscore the absolute importance of this theme, as well as the moral, business, and technical challenges associated with fixing these problems as they arise.

How? Businesses can prevent harm to society by actively monitoring for potential issues, responding to issues as they arise, leveraging scientific findings and experiments to develop effective solutions, protecting user data, and having the courage to do what’s right, even if it costs a little bit more to do it that way. The upside of supporting the zeroth law is having a positive reputation in a functional and stable society. The risk of neglecting the zeroth law is arousing the public’s wrath.
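
As one concrete form of “actively monitoring”, a business could track how narrow each user’s feed has become. A minimal sketch using Shannon entropy as an echo-chamber signal (the per-user log format is hypothetical):

```python
import math
from collections import Counter

def feed_entropy(topics_served):
    """Shannon entropy (in bits) of the topics one user was shown.
    Values near zero mean the feed has collapsed onto a single topic."""
    counts = Counter(topics_served)
    total = sum(counts.values())
    return sum((c / total) * math.log2(total / c) for c in counts.values())

# Hypothetical per-user feed logs.
varied_feed  = ["politics", "sports", "cooking", "science", "music", "sports"]
echo_chamber = ["politics"] * 6

print(round(feed_entropy(varied_feed), 2))  # ~2.25 bits: diverse feed
print(feed_entropy(echo_chamber))           # 0.0 bits: echo chamber
```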

The Three Laws of Mindful Personalization

In summary, the Three Laws of Mindful Personalization are:

1. Don’t harm the user.

2. Do what the user wants, unless this conflicts with the First Law.

3. Monetize effectively, unless the way you are monetizing conflicts with the First or Second Laws.

And finally, the one law to rule them all:

0. Don’t let society be harmed.

I’d love to hear what you think of this fresh twist on a familiar topic — it’s as important to have meaningful discussions on what the laws mean and how we’d implement them as it is to discuss what the laws ought to be.

Please leave a comment below to continue the discussion. And, if you enjoyed this thought-piece, please share it with a friend, and applaud as you see fit.

— — —

Footnotes:

1. This is a generalization: die-hard Asimov fans may point out that there are technically three exceptions; however, none of those three are core to Asimov’s universe(s), so suffice it to say: the vast majority of Asimov’s relevant works assume that the Three Laws are fundamentally required.

2. Some useful definitions + clarification:

  • AGI = “Artificial General Intelligence”, which is defined as a single computing system that can do any intellectual task that a human can do. These mythical, powerful systems are frequently viewed as both the means to a futuristic utopia and the harbinger of inevitable doom for humanity.
  • ANI = “Artificial Narrow Intelligence”, which is defined as a single computing system that can accomplish one narrow task or narrow set of tasks that we previously thought only humans could do. This includes machine learning- and deep learning-based algorithms, which are specialized to a certain task, like AlphaGo, or Google Search.

Note that both of these definitions are strongly time-dependent: technologies that would have been classified as an “ANI” 15 years ago are considered “too basic” to be called an ANI today.

To illustrate the huge difference in capability between an AGI and an ANI, here are two comparisons of fictional AGIs against state-of-the-art (2018) ANIs:

(1) Skynet from the Terminator movies (AGI) vs your personalized Netflix feed (ANI). Note that I’m not making fun of Netflix here — they’re doing really cool stuff in personalization.

(2) Data from Star Trek: The Next Generation (AGI) vs {Duplex, Siri, Alexa} (ANI).

3. While this is technically fudging things so Asimov wouldn’t have to re-number his previous laws, I’m sure that computer scientists in the audience will appreciate Asimov’s decision to implement proper zero-based numbering. Some refer to the amended laws as Asimov’s “3+1 Laws of Robotics”, or the “Four Laws of Robotics”; but let’s be honest: the “Three Laws of Robotics” sounds best.

4. It’s been posited that the Three Laws of Robotics are not ethical to apply to a fully-sentient being because they fundamentally assume that the beings they are applied to should be treated as less than human. While that is a reasonable argument to bring up with regards to beings like Data from Star Trek, or Andrew in The Bicentennial Man, that doesn’t apply to today’s machine learning and deep learning algorithms, which are still approximately as “sentient” as a desktop calculator.

5. Example: General Motors was forced to pay employees not to work, which was expensive for GM and depressing for the employees, who wanted to work but had to remain idle in order to be paid.

Margot Kimura
Lead Data Scientist @yoyolabsio. Former PI in cyber, risk, and decision-making @SandiaLabs. UCSB PhD.