ICTC’s Tech & Human Rights Series

The Partnership on AI: A Multi-Stakeholder Coalition for Responsible Technology

A Conversation with Katya Klinova

ICTC-CTIC

--

The original interview took place May 7, 2020.

Katya Klinova is Program Lead at the Partnership on AI (PAI), a coalition of over 100 organizations from civil society, industry, and academia, where her work focuses on AI, Labour, and the Economy. Prior to joining PAI, Katya was at the Harvard Kennedy School of Government, researching the potential impact of AI advancement on economic growth trajectories of developing countries. Previously, she worked at the United Nations Executive Office of the Secretary-General (SG) to prepare the launch of the SG’s Strategy for New Technology, and at Google in a variety of roles.


Kiera: Thank you so much for speaking with me today, Ms. Klinova! I appreciate your time. For our audience, can you tell me a little about the general work of PAI?

Katya: PAI is a relatively new non-profit that is growing very quickly. We think about “responsible AI,” how to implement the principles of responsible AI in practice, and how to live by them every day. We split our work into thematic pillars: some are focused on thinking about transparency, accountability and fairness of AI; others on documentation practices; others on media integrity, criminal justice, etc. My specific area of focus is labour and the economy.

Kiera: And how did you end up here, working for PAI?

Katya: I was trained as a mathematician and computer scientist, and I worked for Google for seven years. I then decided to go back to school to study economic development and to find people who professionally study poverty and inequality, particularly the relationship between technological advancement and societal conditions. My graduate work at the Harvard Kennedy School was focused on researching the implications of AI advancement for developing countries. After that, it was a seamless transition to join PAI because PAI conducted research on global consequences of AI advancement, which was what I wanted to do. I didn’t want to focus on a specific developed country, but rather to look at global repercussions.

Kiera: That leads perfectly to my next question. As you mentioned, your focus at PAI is on the “AI, Labour, and Economy” thematic pillar. Can you tell me about your current and past work in this area and why it’s important?

Katya: This thematic work has existed at PAI for a couple of years, so there is work that predates me. Last year, the PAI team published three case studies of companies from different sectors and countries. Each company had brought AI into their operations, and PAI sought to analyze what AI meant for their productivity and labour force. The publication represented a range of cases and highlighted both the challenges and opportunities that AI presented to the workforce. Continuing this work, we hosted a workshop in partnership with the Ford Foundation where we brought labour advocates, union representatives, industry, and academia together to discuss opportunities and challenges to workforce wellbeing, in relation to the introduction of AI into production processes. We are now preparing the report that builds on that workshop for publication. The report looks at human rights, financial considerations, emotional aspects, sense of purpose, and all the other ways that AI may influence workplace wellbeing.

That work is wrapping up. At present, we focus on two big initiatives. The first is looking at procurement practices and working conditions in data labelling, data annotation, and what's called "human review," where a human worker reviews AI-made predictions (especially when they are low-confidence). Much of this work happens through virtual on-demand crowd platforms, not traditional work sites, so we are looking at procurement practices and hoping to create resources for product managers and practitioners, guiding them on how to procure data labelling responsibly.

The second area we’re looking at is corporate responsibility and the role of the AI industry in ensuring AI doesn’t fail to benefit economically vulnerable populations around the world. We’re posing the question: how do we practically ensure AI doesn’t leave anyone behind?

Kiera: Do you focus on any geographic areas or is it truly global?

Katya: We are not currently restricting ourselves to any specific focus. When you talk about data-labelling crowd platforms, as an example, there are certain countries that have particularly large workforces on these platforms, including India, the Philippines, and others. But there are also very big workforces in North America (both Canada and the US). Even though there is a dearth of data on this, we know there are a lot of North American workers on these platforms because many projects demand local expertise or English speakers. So this is a global topic, even if some countries have larger stakes in the question.

Kiera: One specific topic that you focus on is the impact of AI and intelligent automation on global inequality and economic development prospects. This is a big question, so perhaps we can break it up into a few, but what is the connection that you are seeing, so far, between technology, inequality, and economic development?

Katya: Competitive wages, or low-cost labour, have been a big part of countries’ development journeys so far. They are one of the main comparative advantages that has enabled countries to compete in export markets.

Where before it was only possible to automate tasks that could be broken down into a precise sequence, AI has now dramatically expanded the possibilities of automation. We can now automate tasks as complex and unstructured as driving. This is a big expansion, and against that backdrop, it is reasonable to expect that competitively priced labour will diminish in importance as an advantage and will not be as big a source of economic growth. This is problematic because many developing countries have huge young populations, and every year millions of young people enter the workforce in need of gainful employment. AI is already reshaping the labour market these young people are entering. How can we use AI to expand their economic opportunities and expand the demand for them in the labour market? That is a big question that more and more development professionals are turning toward.

Kiera: This labour question is also a huge issue in developed countries, with AI transforming the labour market.

Katya: Absolutely. In developed countries, there are also large educational inequalities and populations are aging faster, so this question is very present. Overall, however, the impacts of AI on the labour force may be less critical for a developed country’s economy as a whole because the economy may be more diversified, have more sources of income, and have high-tech export-oriented industries that weren’t relying on low-wage labour to be competitive. These are the reasons why there are potentially more challenges in developing countries; however, this is not to say that AI is not a potential issue for the labour market in developed countries as well.

Kiera: AI, as it advances, will influence the distribution of jobs and nature of work in Canada and globally. AI advances can inject great value into the economy, but they can also cause disruptions as new kinds of work are created and others become less needed. Another area you look at is AI and labour. Can you highlight the most pressing disruptions and/or issues relating to AI and labour at the moment in Canada, the US, and/or globally?

Katya: I group the uncertainties about impact into two buckets: one around labour demand and another around the quality of jobs. On the former, the question is whether labour demand is going to go down for certain groups, and for whom? Historically, technology has usually automated some tasks while also creating new ones; the question is whether these processes balance out.

Research by Acemoglu and Restrepo for the US shows that in the decades following World War II, automation and task creation balanced out nicely. But in the past three to four decades, that balance has tipped: automation has accelerated while the creation of new tasks has slowed. We also need to ask: "Who are these tasks for? What tasks are we automating, and whom are we taking them away from?" If we are automating tasks that don't require college degrees while only creating tasks that require college or graduate degrees, then we are creating a skill bias. This is the kind of technological change that advantages the highly skilled and those with high educational attainment while disadvantaging people who did not have the resources to acquire that education. Educational efforts then become even more important, and we have to be realistic about how quickly we can ramp those changes up. For whom is upskilling or retraining available? How flexible is the labour market? If we make changes faster than people can adapt to them, or if people do not have the resources to adapt, it will be a difficult transition for entire groups within society, whether they are in developing or developed countries.

The second bucket of questions is around the quality of jobs. For example, quality in data-labelling jobs is an issue as companies rely more and more on contingent, temporary workforces. It’s beneficial for them to bring in a workforce only when they need them, but at the same time, our societal structures are not set up to support workers involved in those kinds of work. Where worker benefits, healthcare, and pensions are tied to an employer, it is very important to be a full-time employee, so the question of portable benefits and other support structures for crowd platform workers is very important and needs to be addressed.

Kiera: Has PAI identified any best approaches to minimizing such disruptions and ensuring that the fruits of AI advances are widely shared, and that competition and innovation are encouraged and not stifled?

Katya: In our projects, we focus on the roles and responsibilities of companies, rather than policies/policymakers. Continuing on from above, where I spoke about the issue of the quality of work, we ask: “What are the responsibilities of companies that procure this type of ‘contingent’ work?” Also, when they build their AI product, how can they think about the second- and third-order effects of their actions on the structure of the economy, whether and why it makes society more or less equal, on who benefits, who is hurt, and why certain people are hurt? These are very difficult analytical questions to think about. There are no established practices around how to approach these, so developing analytical frameworks to address these questions and think about second- and third-order consequences is something we are working on. It is a long-term project.

Kiera: Are companies generally positive in their response to the need to consider and tackle these issues?

Katya: Companies that have joined PAI have joined because they already recognize the importance of these topics. Of course, these topics are difficult to grapple with, but they’ve recognized that they need to. We do not just impose recommendations on them; it is a multi-stakeholder process, so we try to really understand the challenges that companies face, to understand current practices and what can be done, and then where can we go from here, and how do we get there? Robust partner participation is central to this work. It’s not easy work. There are no easy prescriptions, but our partners are dedicated to this struggle to find good practices and recommendations.

Kiera: Given how topical it is at the moment, do you have any thoughts on long- or short-term risks related to COVID-19 with respect to how technology is being used?

Katya: I can comment on the way it reflects in my own work. First, I anticipate acceleration of automation. Because of COVID, the lockdowns, and generally increased caution around human-to-human interaction for unknown periods, we are seeing increased demands for robots. Second, there will probably be a decrease in migration. This has implications for AI because the availability and price of labour create incentives for automation. AI development is currently concentrated in a handful of countries where populations are aging, and many of these same countries are quite restrictive around incoming migration, so there are incentives to automate more in these countries. This acceleration of automation might spill over to the rest of the world, even where labour is available and abundant.

Kiera: Looking forward, what do you think the major challenges will be for countries in relation to AI? What will be the hardest problems to address? Or what is one “surprising outcome” that you think AI will have in the next five to 10 years — for example, in an area that we aren’t paying adequate attention to or aren’t predicting well?

Katya: If we could predict it, it wouldn't be a surprise. But I can tell you what surprise I would find very pleasant. Currently, there is some work in AI and education, but I really hope for a breakthrough in this area because we are far off track in terms of improving educational outcomes around the world. We have made global strides in getting kids into school, but this doesn't always translate into great learning outcomes. So if AI is able to bring in individualized learning and make robust education a reality, that would mean so much and would resolve so many worries about our ability to train the workforce and prepare it to take advantage of new technologies. That is a surprise I am hopeful for, but it is hard to say if it will materialize.

Kiera: From your perspective, how does Canada compare to the world in AI development and policy?

Katya: I look at Canada with so much hope. I hope Canada will lead the world and be a model for us in inclusive, green economic growth, whereby AI is used in a healthy way to increase productivity.

In particular, I think that Canada has healthier incentives for AI development because it is much more ready to embrace migration. Canada has a deep talent pool (computer scientists, economists, etc.) and everything it needs to develop AI, in addition to those healthier incentives. So it has a chance to develop AI that is not skewed toward labour-saving applications and economizing on labour as a resource, but rather focused on addressing the real scarcities and needs of the planet and of societies. I think Canada has fewer political roadblocks on this journey, and I have high hopes for it.

Kiera: Thank you so much for your time! It was a pleasure to speak with you.

Katya Klinova: Katya Klinova leads AI, Labour and the Economy research programs at the Partnership on AI. Prior to joining PAI, Katya was at the Harvard Kennedy School of Government, researching the potential impact of AI advancement on economic growth trajectories of developing countries. Previously, she worked at the UN Executive Office of the Secretary-General to prepare the launch of the SG’s Strategy for New Technology, and at Google in a variety of roles.
Kiera Schuller: Kiera Schuller is a Research & Policy Analyst at ICTC, with a background in human rights and global governance. Kiera holds an MSc in Global Governance from the University of Oxford. She launched ICTC’s Human Rights Series in 2020 to explore the emerging ethical and human rights implications of new technologies such as AI and robotics in Canada and globally, particularly on issues such as privacy, equality and freedom of expression.

ICTC’S TECH & HUMAN RIGHTS SERIES:

ICTC’s Tech & Human Rights Series dives into the intersections between emerging technologies, social impacts, and human rights. In this series, ICTC speaks with a range of experts about the implications of new technologies such as AI on a variety of issues like equality, privacy, and rights to freedom of expression, whether positive, neutral, or negative. This series also particularly looks to explore questions of governance, participation, and various uses of technology for social good.

Information and Communications Technology Council (ICTC) - Conseil des technologies de l’information et des communications (CTIC)