New! AI-Enabled (Gluten Free) Sliced Bread

Does every product need artificial intelligence?

Ken Grady
The Algorithmic Society

--

There is an old saying in the newspaper business that bad news sells newspapers. Of course newspapers, with bad or good news, are mostly gone. But the modern equivalent is clickbait. Wild headlines get more clicks. So, it was with some amusement that I saw this clickbait: “By 2020, every single product and service will have artificial intelligence.”

I like a good clickbait headline as much as the next person, so I clicked through to the article. I discovered, not to my surprise, that the author was making this one up. No citations, just some standard material about AI replacing jobs, the over-hyping of AI, and the worry that we don’t have enough people trained in AI to meet the demand.

Read the article and you will see a reference to “AI washing.” In the hot field of made-up terms, appending “washing” to something (e.g., “data washing”) indicates the washer is over-hyping the thing: data, AI, whatever. So, in this data age, you’d better have a lot of data even if you don’t. Claiming that you do when you don’t is data washing. Claim AI in your product when “not really” would be the better description, and you are AI washing.

We have seen this in the legal industry, where the standards for actually having what you say you have in marketing are a bit lax. Project management gained popularity with clients, and pretty soon law firms were project management washing. Someone was given a title (“project manager for information flow,” formerly the mail room supervisor) and then the firm could check yes next to the “do you use project management” entry in an RFP.

Lawyers can wash with the best of them, so AI washing has come to the industry. Law firms proudly announce they are using AI, and startups exclaim that their products are AI-enabled. Good stuff! But getting lost in all the laundry is an interesting question: when should we not AI-enable something, even though we can?

Just because we can do something doesn’t mean we should do it, even if doing it would be beneficial. I can eat chocolate and I would benefit from doing so, but that doesn’t mean I should eat chocolate. When it comes to chocolate, the consequences of eating it are minor and can be controlled. When it comes to AI, we don’t have to go to “end of the world” scenarios to find bad outcomes.

Causation Versus Correlation

While preparing for a legal conference on the use of data in law, I was asked about the causation versus correlation question. Do practicing lawyers care about causation, the key question asked by academics, or are they primarily concerned with correlation? Most practicing lawyers care not a whit about causation.

The same focus holds outside law. In many (most) cases, AI is used for correlation. If we do X and behavior Y results, and behavior Y is good for us, then we should do more X. How X and Y are linked isn’t of interest, only that pulling the string causes the marionette to dance.
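To make the distinction concrete, here is a minimal sketch (mine, not drawn from any legal product, and the variable names are invented): two behaviors that look tightly linked to a correlation-hunting tool even though neither causes the other.

```python
# Hypothetical illustration: X and Y are both driven by an unobserved factor Z.
# A correlation-based tool will report that X predicts Y, but nothing in the
# numbers says that doing more X will change Y at all.
import numpy as np

rng = np.random.default_rng(42)

z = rng.normal(size=10_000)            # hidden confounder, never observed
x = 2.0 * z + rng.normal(size=10_000)  # "do X" -- driven by Z
y = 3.0 * z + rng.normal(size=10_000)  # "behavior Y" -- also driven by Z

print(np.corrcoef(x, y)[0, 1])         # strong correlation, roughly 0.85
```

The correlation is real and may even be useful, but it answers a different question than causation does, which is exactly the distinction the conference question was getting at.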

For many things, using AI to discover heretofore unrecognized patterns, interesting correlations, seems okay. In many cases, it can be good: AI can help us avoid doing things that lead to injury or death. In many cases it is neutral: AI can write a sports article in a Ring Lardner-ish style, and I find that interesting mimicry, but neither good nor bad.

That leaves us with the many things AI can do that raise the original question: we can, but should we? A second question is, if we should, what should the boundaries be? Another interesting area is AI safety: what happens when we intend AI to do one thing, but it does another? (Lawyers: notice the purposeful use of the word “intend.”)

These sound like the types of questions lawyers would like to ponder, curled up with a big stack of thick books and a steaming cup of tea. The universe is not lacking for think tanks focused on the scary AI question. But it does seem to be lacking in AI governance thinking. As an article in Vanity Fair about Elon Musk’s OpenAI puts it:

So far, public policy on A.I. is strangely undetermined and software is largely unregulated.

At the University of Oxford, there is the Future of Humanity Institute. Cambridge, Massachusetts has the Future of Life Institute. Elon Musk funded OpenAI. The University of Cambridge has the Centre for the Study of Existential Risk. Eliezer Yudkowsky co-founded the Machine Intelligence Research Institute. The world has the virtual Global Catastrophic Risk Institute. And then there is the new Partnership on Artificial Intelligence, founded by a who’s who of tech companies with some interesting players, such as the ACLU, thrown in for good measure.

These groups have philosophers, mathematicians, physicists, and data scientists. But the glaring exception (in my opinion) is lawyers. We are thinly represented in these groups. As a profession, lawyers are mostly absent from the key discussions, though not absent from scholarship (there is a growing gaggle of legal scholars addressing relevant issues).

The worry (channeling Sean Connery in The Hunt for Red October) has been the subject of articles in publications ranging from the very techie-oriented to Vanity Fair, which published a neat graphic.

[Graphic: Vanity Fair. Photographs of Tegmark, Page, Wozniak, Hassabis, Gates, Hawking, Thiel, Russell, Altman, Bostrom, Zuckerberg, Kurzweil, LeCun, Ng, and Musk; credits via Getty Images, Newscom, Redux, Wired, and Reuters/Zuma Press.]

Look at the pictures and, with the exception of Peter Thiel, who is acting as a technologist and financier, there isn’t a lawyer among them. Yet AI raises some of the most interesting legal questions of our time, including how to govern a hybrid society of people and machines.

We can start the list. What role should AI have in the criminal justice system, a system that can have a deep and profound impact on any person’s life? Should we turn over questions of criminal law to AI? Should AI play a role in deciding who does and does not go to jail? Under what circumstances someone will be released from jail? Who poses a threat to society? We can number crunch these questions, but they raise many issues outside of Bayesian probability.
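To see what that number crunching looks like, and what it leaves out, here is a small, hedged Bayes’ rule calculation (the rates are invented for illustration, not taken from any real risk tool):

```python
# Hypothetical numbers: suppose 5% of a population would actually reoffend,
# and a risk model flags reoffenders 90% of the time with a 10% false-positive
# rate. Bayes' rule tells us what a "flagged" label actually means.
p_reoffend = 0.05
p_flag_given_reoffend = 0.90
p_flag_given_clear = 0.10

p_flag = (p_flag_given_reoffend * p_reoffend
          + p_flag_given_clear * (1 - p_reoffend))
p_reoffend_given_flag = p_flag_given_reoffend * p_reoffend / p_flag

print(round(p_reoffend_given_flag, 2))  # about 0.32
```

Even with an accurate-sounding model, roughly two out of three flagged people would not reoffend, and nothing in the arithmetic tells us whether, or how, such a score should be used against a person. That is the part that sits outside Bayesian probability.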

Many articles have been written about using drones in warfare, but we don’t have a group focused on the global governance questions around drones. Take the clickbait headline: as more goods and services become interconnected, who controls the “army of one”? Presidents may think they are the commanders in chief, but the people who control that army can do much more than start a war; they can manipulate our daily lives.

It is easy to go scary when talking about AI, and it is equally easy to dismiss the scary stories. The world today is a delicate balance of societies governing themselves and attempting to maintain or improve that balance. This is governance; governance has at its core the rule of law; and the rule of law has something to do with lawyers. Time to step up, folks, and participate in the discussion.

If you enjoyed reading this article, please recommend (click the heart) and share it to help others find it!

About: Ken is a speaker and author on innovation, leadership, and on the future of people, process, and technology. On Medium, he is a “Top 50” author on innovation, leadership, and artificial intelligence. You can follow him on Twitter, connect with him on LinkedIn, and follow him on Facebook.
