AI Targets Yo-Yo Ma Not Elon Musk

Why we don’t have robot lawyers

Ken Grady
The Algorithmic Society
6 min read · Jul 12, 2017


Yo-Yo Ma plays with the Chicago Symphony Orchestra led by Daniel Barenboim

Everyone knows Yo-Yo Ma, the famed cellist. Identified at a young age as someone with incredible talent, he has spent his lifetime mastering his music. To this day, he still practices up to six hours a day when he is not performing.

Elon Musk, though much younger, is as famous as Yo-Yo Ma. Elon’s skills cross many domains. He was one of the stars behind PayPal. He founded Tesla and SpaceX. His newer ventures include The Boring Company and SolarCity (now part of Tesla). His curiosity spans many areas, though his interests seem to coalesce around great challenges facing the world (and, of course, making some money along the way).

How is it, then, that AI targets Yo-Yo Ma, but not Elon Musk? Musk is the one whose businesses depend on science and in many cases directly on AI. Ma is an artist and makes no use of computers in what he does. He is in some ways the antithesis of technology. He once performed for the BBC, playing all six Bach Cello Suites, each consisting of six movements. The performance lasted almost three hours, consisted of Ma sitting alone on the stage, and included only one brief break.

AI Is More Ma Than Musk

AI today is a one-trick pony. We train it to do one thing, and it does that one thing incredibly well. It is the young Ma, learning each day to do a bit better at whatever it does.

AI-enabled robots do not have the dexterity of Ma. They cannot play a cello. But AI can play chess, Go, and Jeopardy. It can learn, improve, and learn more. It can develop a proficiency level that exceeds that of any human.

Ask that same AI that plays chess better than a human to drive a car, and it fails. It has no knowledge of driving. It must start over, from scratch, and learn a new skill.

Can it learn to play chess and drive a car? In theory, yes. But while a human could sit in a car and play a game of chess as he drove down the road (not recommended, of course), this would be an interesting and challenging blending of skills for AI.

Now add another challenge to the mix: the AI must drive, play chess, and carry on a conversation. Again, our human could do this (and again, not recommended). The human may not do any of the three tasks well as she constantly shifts attention among them, but she could do it. Not only could she do it, but she would not require any training. Having never done such a thing, she could, at a moment’s notice, do all three. Our AI can’t do two, and it would require much effort to get it to all three.

Elon Musk is that person driving, playing chess, and carrying on a conversation. He is constantly switching among his various enterprises. Along the way, he does the tasks required of anyone (e.g., eating, sleeping, bathing, getting dressed). And, at a moment’s notice, he can pick up a new task.

Ma could be Musk and Musk could be Ma. Ma could choose to perform, compose, and conduct. Musk could choose to focus on Tesla and nothing else. That is, each can monotask or multitask at will. They have the skill to do so.

Getting From Ma To Musk

Going back to our AI, we can now see part of the challenge it faces in becoming that over-hyped tool we read about. Doing one thing well is hard enough. Doing the full suite of things that humans can do, switching among skills and combining skills, requires a whole new level of proficiency.

Now, scientists are hard at work trying to get AI to this new proficiency level. Think of the problem they face. Neither Musk navigating his business duties, nor that crazy person driving down the road playing chess and talking, uses completely separate systems for each task. The systems are somehow linked.

Your brain somehow can handle multiple learning streams and multiple requirements, without having a system for each one. They must be interconnected.

The name of the game in AI right now is something called a “neural network.” In very simple terms, neural networks began as an attempt to mimic the human brain. A network was made up of layers, and each layer handled learning at a different level. Some layers handled granular learning, and some handled coarse learning.

To those first models, computer scientists added features not found in the human brain but which improved the performance of neural networks. The goal was to increase the neural network’s performance on a single task. It was as if the scientists went back to the Ma concept: become very good at one thing.
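To make this concrete, here is a minimal sketch of a single-task, layered network. PyTorch and the digit-classification setup are assumptions chosen for illustration, not details taken from the research described above.

# A minimal sketch (PyTorch assumed) of a layered network trained to do
# one thing well. Early layers pick up granular features; later layers
# learn coarser ones.
import torch
import torch.nn as nn

single_task_net = nn.Sequential(
    nn.Linear(784, 256),   # granular: raw pixels to low-level features
    nn.ReLU(),
    nn.Linear(256, 64),    # coarser: combinations of those features
    nn.ReLU(),
    nn.Linear(64, 10),     # the one task it knows: ten output classes
)

optimizer = torch.optim.SGD(single_task_net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def train_step(inputs, labels):
    """One pass of the practice loop: the network gets a little better
    at its single task and at nothing else."""
    optimizer.zero_grad()
    loss = loss_fn(single_task_net(inputs), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

Every parameter in that network serves its one task. Ask it to do anything else and, like the chess engine asked to drive, it starts over from scratch.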

But this still left a problem. How does a human do many things? One solution receiving attention is the “network of networks” approach. Set up one neural network, then another, and another. Now, put a network on top of those networks. The meta network helps transfer learning among them.

Using our driving-chess-talking example, there is a network for each of those activities and a network that sits on top of the three networks. The meta network tries to transfer learning from driving, to chess, to talking. All three get better as any one learns a new way to do something. Learning becomes a transferable skill.
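One common way to picture that arrangement in code (PyTorch again assumed, with the shared layer and the output sizes invented purely for illustration) is a set of task networks that all learn through a shared component:

# A rough sketch of the "network of networks" idea: one small network per
# task plus a shared piece that lets learning in one task benefit the others.
# The shared-trunk design and the task names are illustrative assumptions.
import torch.nn as nn

class NetworkOfNetworks(nn.Module):
    def __init__(self, input_size=128):
        super().__init__()
        # The piece the meta network manages: a representation shared by
        # every task network, so improvements here carry across tasks.
        self.shared = nn.Sequential(nn.Linear(input_size, 64), nn.ReLU())
        # One network per task, as in the driving-chess-talking example.
        self.heads = nn.ModuleDict({
            "driving": nn.Linear(64, 4),      # e.g., steer/brake decisions
            "chess":   nn.Linear(64, 4096),   # e.g., scores over moves
            "talking": nn.Linear(64, 10000),  # e.g., scores over words
        })

    def forward(self, x, task):
        # Route the input through the shared layer, then the task's own head.
        return self.heads[task](self.shared(x))

Because every head learns through the same shared layer, a lesson learned while driving nudges the representation that chess and talking also use, a crude stand-in for the meta network’s job of transferring learning among the networks.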

Beware The Robot Lawyer Hype

There is no area that receives more AI hype than law. Only a few law firms have experimented with AI tools, yet read the headlines and you would think your lawyer will be a robot tomorrow. Movies have set expectations far beyond reality.

A lawyer is more the person driving, playing chess, and talking than the cello-playing Ma. Lawyers employ many skill sets. We can imagine a lawyer who spends his whole life mastering one skill: legal research. But we know that such a lawyer does not exist.

Lawyers are more like Musk. They must jump from thing, to thing, to thing. The robot-lawyer must blend research, with drafting, with client counseling (or we must have many robots, each devoted to one task).

AI is moving along the continuum from monotask to multitask. The hard separations that once existed have softened. For example, AI can take a natural language query, search through its data, and come back with an answer. The accuracy still varies (as it does when humans perform the task), but it is improving.
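A toy sketch of that monotask skill, in plain Python with a three-sentence “library” invented for illustration, shows how narrow the trick really is:

# An illustrative sketch only: match a plain-English query against stored
# text by counting shared words and return the closest document.
def search(query, documents):
    query_words = set(query.lower().split())
    def overlap(doc):
        return len(query_words & set(doc.lower().split()))
    return max(documents, key=overlap)

corpus = [
    "A contract requires offer, acceptance, and consideration.",
    "Adverse possession can transfer ownership of real property.",
    "A patent grants exclusive rights to an invention for a limited term.",
]

print(search("what does a valid contract require", corpus))

It can match words to words. It cannot lift a principle out of one case and carry it into a new area, which is exactly the gap described next.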

Ask that same AI to extract a principle from a case and then apply that principle to a different set of facts in another area (say, property ownership rights from real estate to intellectual property), and the AI will have great problems. An associate can perform the task, but AI cannot.

The Challenge That Lies Ahead

Of course, no computer can match Ma or Musk. We know that Ma can multitask very well and do things no computer can match. Ma and Musk have more in common with each other than any computer has with either of them.

The challenges for humans are several. First, how do we integrate computers with what we do? Where can we benefit from a tool that can do a task better than we can? What do we do with the time we gain?

We also must learn how to be the meta system for some time. How do we integrate what various AI systems provide? This raises the biggest near-term issue that affects everyone, including lawyers. What are the rules governing society as we move from humans only to humans plus AI? The world will not go from human-driven vehicles today to autonomous vehicles tomorrow. We won’t have human caregivers this week and robotic caregivers next week. We will have a long period of hybrid societies, humans mixing with AI. That is still a challenge for humans to solve.

If you enjoyed reading this article, please recommend (click the heart) and share it to help others find it!

About: Ken is a speaker and author on innovation, leadership, and the future of people, process, and technology. On Medium, he is a “Top 50” author on innovation, leadership, and artificial intelligence. You can follow him on Twitter, connect with him on LinkedIn, and follow him on Facebook.
