A Philosopher of Science is Helping Shape the Ethics of AI

Simon Hudson
Published in Cloud&Co. · Mar 2, 2017 · 7 min read

In previous industrial revolutions, we had decades to adjust our schools to new economic and technological forces. Now, half of what students learn in their first year of college may be outdated by the time they graduate. With that in mind, perhaps the most important skill for the coming AI revolution will be the ability to learn new skills.

One silver lining of job automation by AI is that it frees people up to do more customer service, surprisingly adding more of a human touch. David Deming, Harvard professor of public policy, education and economics, has pointed out that skills learned in preschool — like sharing, negotiating, empathy and cooperation — will be crucial for competing in the changing job market… Polish up those people skills.

Photo by Lewis Hine

Another place looking at how AI will change society is the University of Montreal (UdeM), home to many of the world’s leading AI researchers. Frédéric Bouchard is a philosopher of science at UdeM, and part of his job is to match these computer scientists with social scientists at the University. “What’s exciting in Montreal is you have a critical mass of people from different fields who can think about these questions and do it right,” he says.

Given his role, Bouchard has a unique vantage point on AI’s development. “I’m not a futurologist; but looking at the technological development and the social uptake across different markets, we’re talking about twenty years, tops [before it’s everywhere].” He believes we shouldn’t have to wait for the technology to be up and running to start asking the right questions. We asked him to tell us more about his role in getting these questions answered.

How did you become interested in AI?

Bouchard: I became personally interested in AI when one of my students was working on a master’s thesis in evolutionary robotics, examining how self-constructing systems seem to follow evolutionary laws.

Given my background is in evolutionary biology and ecology, I’m interested in what can evolve — what are the rules of evolution, if you will. And this is a big question about AI or any complex system: Do they behave as if they were living organisms or do they behave according to their own novel rules?

Frédéric Bouchard, photo by Kara Turcotte

So we’re only 20 years away from robots taking all the manufacturing jobs?

Bouchard: Forget robots. I actually think the greatest changes from big data and AI will be to white-collar jobs. Educated people tend to think that their jobs are not replaceable, and that’s myopic — educated people are the most expensive employees. They’re the ones most investors would want to replace.

[Author’s note: The World Economic Forum (WEF) predicts that a mix of artificial intelligence, robotics, nanotechnology and other socio-economic factors will lead to as many as 5 million jobs disappearing worldwide by 2020. For the same reasons, it also says that up to 2.1 million jobs could be created. On a global scale, 5 million jobs is not huge, but that counts only jobs that disappear completely. Much more significantly, on average, 35% of jobs in every industry will see their core skill sets shift (the financial sector is highest, at 43%).]

It’s not about optimism or pessimism, because these developments will mean job loss but also job creation; what we want is an idea of how we would like to transform work. I see a lot of positive paths to develop these technologies in ways that actually improve the human experience — and I mean experience in the philosophical sense, not user experience. Just saying “no” to this — becoming Luddites — is ridiculous, because there will be many uses from which we could all benefit.

How can we develop this technology to behave in a positive way?

Bouchard: These are not purely technological questions and answers. These are human answers. We need to figure out how to get people together from the humanities, social sciences, medicine, engineering, computer science and so on, and make sure they have places to exchange ideas on these issues.

It’s very special and exciting. It’s also troubling, because nobody knows what’s going to happen. How will it affect human and social relationships? How do we do it in a way that feels right? And then you have to define what is “right”.

And it’s not just legal. Part of it is economics — consider the price of sensors. AI relies on “big data”, which depends on trackers and sensors. Cheap sensors mean ubiquity. So economic forces will in part determine what AI can actually ‘think’ about. But sociologists will need to figure out who is tracked and measured by these sensors. If you design a policy using a set of big data and then realize it only talks about rich white people, that policy will be constrained by that data.
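To make that concrete, here is a minimal sketch in Python, with entirely invented numbers and names, of how a decision tuned to whoever the sensors happen to cover goes wrong:

```python
# Hypothetical sensor data: only the affluent district is instrumented,
# so it is the only group that ever appears in the dataset.
sensor_readings = {
    "affluent_district": [12, 15, 11, 14, 13],  # commute times (min), tracked
    "outer_suburbs": [45, 52, 48, 50, 47],      # untracked: never collected
}

collected = sensor_readings["affluent_district"]  # all the data we actually have

avg_commute = sum(collected) / len(collected)
print(f"Observed average commute: {avg_commute:.0f} min")  # prints ~13 min

# A policy tuned to this number concludes transit is fine, while the
# unmeasured population averages ~48 min. The policy is constrained not
# by reality but by who the sensors happened to track.
```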

So, you need many different types of expertise to really make sure that the development is done properly. Otherwise, we’re just naively playing with really sharp knives. We need to ask: “What is a knife for?” “Who should we give it to?” “Who should decide what it’s going to be used for?”

If we wait until after the technology is developed to think about these things, it will be too late to develop it in a responsible way.

What is your role in answering these questions?

Bouchard: As a philosopher of science, I’m in a unique position, because we talk to scientists all the time. We try to understand what they care about, and we can translate their worries — philosophers of science can be brokers of expertise. So I can determine whether a question has a legal, ethical, technological and/or political aspect. Then it’s a matter of finding the right people to answer it.

When you think about it, that’s actually the ideal purpose of universities: to have these tables for discussion where we get to important answers. It’s a feasible dream, this lunch table: you’ve got a programmer, a bioethicist, a labour expert and a political scientist, and they all have lunch and discuss what’s happening. Chances are, if we develop these spaces for discussion, it will transform how we develop things and incorporate them into our practices.

Then your position is actually putting a “table” together.

Bouchard: It’s more about seeing that a table is needed, then knowing who should be invited. And then I can help find people, and I initiate and maintain the conversation. What I’m describing is just a university, right? The social promise of universities is that if you let us have these very special structures and organizations, we’ll come up with ideas that cannot come up anywhere else — so universities have a special role to play, institutionally.

It’s an organic thing. You can’t just do it at one-off events, like a big summit or conference. I mean, they’re helpful, but that’s not how you actually do it. The way you do it is by exchanging ideas continuously.

That’s why scientific breakthroughs happen in certain places — they had the right mix of people, the right goals, the right conditions. No single sector of society will solve this on its own — it cannot be purely driven by governments or universities or corporations. Why? Because these questions are huge — we are not trying to figure out how to build a juicer. These are incredibly complex systems that are well beyond an individual sector of society doing it right on its own.

It may not be as critical as climate change, because climate change is about survival. But this is as global as climate change.

Take the last year in politics. Now think of the Facebook algorithm that determines what news you see. I don’t know its inner workings, but I assume it’s not that complicated — it’s just induction on past behaviours: who likes what? But the result of those algorithms is that we are in very closed-off, homogeneous intellectual communities.

On your Facebook page, which covers most of your social interactions, you’re with like-minded people. That’s a technology. And people say, “they [Facebook, etc.] decide what you like,” but that wasn’t really the point. They didn’t care about that; they just wanted you to stay on. And then you’re stuck in an echo chamber. That is what happens when technology is developed without an eye to societal consequences.
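The kind of ranking Bouchard describes, plain induction on past likes, can be sketched in a few lines. This is a deliberately naive illustration in Python, not Facebook’s actual system, and the topic labels are invented:

```python
from collections import Counter

def rank_feed(posts, liked_topics):
    """Rank candidate posts by how often the user already liked their topic."""
    counts = Counter(liked_topics)  # "induction on past behaviours"
    return sorted(posts, key=lambda p: counts[p["topic"]], reverse=True)

liked = ["local_politics", "local_politics", "cats", "local_politics"]
posts = [
    {"id": 1, "topic": "local_politics"},
    {"id": 2, "topic": "opposing_politics"},
    {"id": 3, "topic": "cats"},
]

for post in rank_feed(posts, liked):
    print(post["id"], post["topic"])

# Output order: 1, 3, 2. Content the user never engaged with sinks to
# the bottom, so each session narrows the feed a little further; the
# echo chamber emerges without anyone deciding to build one.
```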

Considering the speed of progress in the field of AI, would you say these conversations are starting late and we’re playing catch-up?

Bouchard: We are not late, but we are not early. I think we need to do this now. In a way, you can never be late because ultimately politics can stop anything. This is not a force of nature; this is human technology that’s being implemented following certain laws and regulations, or the absence of laws and regulations.

I like to say I’m an optimist, but I remain vigilant. Optimism doesn’t mean good things will happen — optimists think that good things can happen. And that’s where you need to be vigilant and make sure that smart people are around the table exchanging ideas and doing it right. In 20 years, we’ll still be arguing about these things — but we’ll say, “Man, we dodged that bullet. We did this; and who would have thought that we would get here?”

////

Originally published at www.cloudraker.com. Special thanks to Katharine Dempsey and Nathalie Williams for putting this together.
