Is Artificial Intelligence being let loose on the world of work?

TUAC OECD
Workers Voice @ OECD
11 min read · Jan 15, 2020

Organisational change through Choice, Co-creation and Consultation

by Anna Byhovskaya (@AnnaByhovskaya)

What would happen to the way we live and work if Artificial Intelligence (AI) powered systems were run on minimum standards, with inadequate supervision and consultation mechanisms? Such an “AI let loose” scenario would most likely entail serious downside risks to the safety, income and well-being of many. Occupations and working conditions in particular would change dramatically. They already do. AI is spreading fast across different sectors. As we know, algorithms are heavily used on online platforms (urban transport, hospitality, etc.), in human resources and for professional appraisals.

A lot of benefits could come from AI: productivity gains through greater precision, and more security and insights based on analytics. For workers, AI could bring ad-hoc training through Augmented or Virtual Reality, higher occupational safety with digital twin systems running in the background and, if all goes well, adequate wages and/or reduced working time. This scenario is within reach, but only if the right choices are made when designing and applying AI systems. To reap the benefits, it is important to understand digitally networked systems and their impact on workers.

Relying on so-called ‘human-centred’ design and ethical considerations will not be enough. Robust policy frameworks and targeted investments are needed to protect quality jobs and human autonomy. Cautious steering — if not a precautionary approach? — must prevail in the design and operation of AI. Citizens need assurance that somebody — somebody driven by the public interest — shapes its deployment and use responsibly.

For this to happen, actionable frameworks based on Choice, Co-creation and Consultation need to be put in place — especially when it comes to organisational change.

Getting hold of a busy AI Space

Breakthroughs and the diffusion of AI are not expected to slow down anytime soon. Unlike the commodities and inputs of traditional “brick and mortar” businesses, data — the core element on which machine learning and algorithms are based — knows no degradation and can be recombined, tailored and versioned infinitely. The pacing challenge is real. AI and those driving it move fast. Money goes where AI is: “AI start-ups have so far attracted around 12% of all worldwide private equity investments in the first half of 2018, a steep increase from just 3% in 2011”, according to the OECD. Not only is the AI market booming, it is both geographically and economically highly concentrated. The US is leading on both public and private investment, closely followed by China. In 2019, the top 250 companies accounted for 70% of R&D spending, 65% of patents filed and 40% of trademarks (see the World Corporate Top R&D Investors). Top sectors, aside from ICT, are transport, electronics and machinery. Finance and insurance also see great increases, not least in view of trademarks filed for FinTech.

Irrespective of whether AI development slows down or not, it is already crunch time to tackle some key policy issues, including:

● The market concentration of leading firms and investors;

● The misuse of AI against democratic principles;

● The opaqueness of AI-design and the challenges of privacy, bias and discrimination; and

● The lack of interoperability and cross-border governance solutions.

The AI policy space is crowded. There are already around 50 private sector-led initiatives that issue guidelines and principles.

The Partnership on AI is probably one of the most prominent, having now expanded to over 90 members (including one global sector union). The technical community is developing its own standards (e.g. the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems), as are CSOs (e.g. the Universal Guidelines for Artificial Intelligence) and academic institutions (see the Montreal Declaration). The EPFL International Risk Governance Center, for example, in discussing the governance of decision-making algorithms (DMLAs), issued recommendations such as that the “governance of DMLAs must consider existing regulations and key benchmarks, against which DMLAs’ performance must be calibrated”. Some private sector actors are setting their own codes of conduct — see IBM, Microsoft with its guidance for bot designers, or PwC Switzerland’s model for AI interoperability.

In this crowded space, policy makers need to assert themselves and civil society stakeholders need to get hold of standard-setting processes. At the international level, the EU and the OECD brought together multi-stakeholder expert groups to set guidelines and principles on the design and use of AI — with social partners on board (myself on behalf of the TUAC, together with UNI Global Union, in the OECD process). The 2019 OECD Ministerial went on to adopt the first intergovernmental Recommendation on Artificial Intelligence (since endorsed at the G20 level). This is quite important considering its geographical reach. A follow-up roadmap to make the principles a reality could compensate for the lack of governance to date.

What is special about the OECD Principles, besides their thorough coverage of technical aspects, is that they feature an inclusive growth and labour market dimension, including a principle on the rights of workers to a fair transition:

“Governments should take steps, including through social dialogue, to ensure a fair transition for workers as AI is deployed, such as through training programmes along the working life, support for those affected by displacement, and access to new opportunities in the labour market” and should work closely with stakeholders to “aim to ensure that the benefits from AI are broadly and fairly shared.”

… and responsible management at the workplace:

“Governments should also work closely with stakeholders to promote the responsible use of AI at work, to enhance the safety of workers and the quality of jobs, to foster entrepreneurship and productivity, and aim to ensure that the benefits from AI are broadly and fairly shared.”

On paper, such provisions (hopefully) create greater awareness of the link, and indeed conditionality, between investment decisions in R&D, training, infrastructure and the effective use of AI on the one hand, and employment regulation and active labour market policies on the other. More broadly, the Principles confirm that AI diffusion has to be dealt with in a cross-policy and multi-stakeholder, bottom-up manner. Top-down corporate AI strategies with no, or too little, third-party checking will be insufficient to ensure that labour market and broader public good objectives are met.

AI in the world of work: Organisational change done right

Choice is key

“Somebody who makes 5 dollars a day will now be replaced and considered as too expensive” — this is according to Volker Hirsch in his TEDxManchester talk. Hirsch refers to the 60,000 lay-offs in Apple-contracted Foxconn factories, but also to the builder robot that combines all the inventions of recent years, never gets sick and works endlessly. These present-day examples are not widespread — yet — and depend on choices, amongst others regarding new roles for workers alongside AI-powered systems. Others adopt a “Human plus machine equals superpowers” point of view and count on job creation. This goes into the trainer/explainer scenario, which does not necessarily acknowledge the consequences for those who are not in the IT sector, not highly skilled, not ‘younger’ or not residing in industrialised countries — in short, those unable to assume new roles easily.

To understand on-the-job impacts, one needs to look into the AI design-to-implementation cycle and the interplay between advanced robotics, sensor devices and decision-making systems (see here for example). One also needs to consider what AI systems are already capable of, such as speech, language and face recognition, and predictive capacities (see for example this BBC infographic), and anticipate their future potential.

The way in which AI is deployed (or any type of new technology, for that matter) is ultimately a matter of choice: the choice to do it in the first place, but beyond that, the choice of how to do it. Take KPMG International’s predictions on how AI could fundamentally alter occupational functions in Human Resources (HR) — as shown below. This depiction is not too far-fetched. And yet, the question remains whether all of these tasks should be automated in the first place.

Surely, “time and attendance” or “data management” can be effectively managed through an automated system. The answer is far less straightforward for other tasks such as “workforce planning” or “talent identification”, for which human oversight and judgement would surely remain desirable. There is a distinction to be made between AI as a facilitator for decision making and implementation, and AI as the decision maker. And workers are saying just that: a recent survey administered by Gartner shows that a “majority of employees (52%) say they would prefer AI to be deployed as an on-demand helper — essentially, acting as their own employee — rather than as their manager (9%), coworker (11%), or proactive assistant (32%)”.
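
To make the facilitator-versus-decision-maker distinction concrete, here is a minimal sketch in Python. All names, scores and thresholds are hypothetical illustrations, not a reference to any actual HR product: the only difference between the two functions is whether the model's score is acted on directly, or merely shown to a human who retains the final say.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """Output of a hypothetical HR scoring model."""
    candidate_id: str
    score: float    # model confidence between 0.0 and 1.0
    rationale: str  # human-readable reason for the score

def ai_as_decision_maker(rec: Recommendation) -> bool:
    # The model's output is acted on directly: no human judgement involved.
    return rec.score >= 0.5

def ai_as_facilitator(rec: Recommendation, reviewer_approves) -> bool:
    # The model only informs; a human reviewer makes the final call
    # and can overrule the score in either direction.
    suggestion = "hire" if rec.score >= 0.5 else "reject"
    print(f"Model suggests {suggestion} ({rec.score:.2f}): {rec.rationale}")
    return reviewer_approves(rec)

# The same borderline recommendation under the two governance choices:
rec = Recommendation("c-042", 0.51, "keyword overlap with job description")
print(ai_as_decision_maker(rec))                # True: acted on automatically
print(ai_as_facilitator(rec, lambda r: False))  # False: the human overrules
```

The governance choice lives entirely in that last argument: who, if anyone, sits between the score and the outcome.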

Choices also matter when it comes to creating new job opportunities by empowering people to enter new sectors or professions that are not prone to be fully automated, even in the distant future. Recent PwC research pointed to the fact that, with the right investments: “In the UK, by 2037, AI will create more jobs (7.2m) than it displaces (7m) by boosting economic growth.” Industrial policy and investment decisions could aim at creating jobs in new services, the care economy and renewable energy — not least considering the pressure coming from the joint pull of climate and demographic change. Of course, investments in workers’ access to skills and lifelong learning will be necessary. There is a fair amount of consensus around that. The question is whether we walk the walk. Research by Accenture reveals that “business leaders don’t think that their workers are ready for AI” and yet “only 3% of those leaders were reinvesting in training” (via Wired).

Co-creation is key

A human-centred approach to AI needs to be rooted in co-creation. Developers, public and private employers, and workers should discuss why and how AI systems are deployed in a sector or at a workplace. The most important aspect here is, yet again, the willingness of the parties to do so from the beginning — when systems are about to be introduced. This means steering away from top-down, command-and-control approaches. Instead, organisational change should be guided by the goal of safeguarding the autonomy and self-determination of people working with new systems. This concerns workers’ occupational health and safety, and their rights to transparency, information and redress. Explainability and/or the right to explanation would resolve some of the issues. Complementary decisions might need to be made around working time and training. This is about securing job quality with AI — not the other way around.
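
What a right to explanation and redress could require in practice can be sketched as a decision record. The fields below are illustrative assumptions, not any regulatory standard: the point is simply that every algorithmic decision affecting a worker gets logged with enough context to be audited and contested later.

```python
import json
from datetime import datetime, timezone
from typing import Optional

def log_decision(worker_id: str, decision: str, model_version: str,
                 inputs: dict, explanation: str,
                 reviewer: Optional[str]) -> str:
    """Record an algorithmic decision so it can be audited or contested.

    Which fields must be kept, and for how long, would be a matter for
    regulation and workplace agreements; these are placeholders.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "worker_id": worker_id,
        "decision": decision,
        "model_version": model_version,  # which system produced the outcome
        "inputs": inputs,                # the data the decision relied on
        "explanation": explanation,      # plain-language reason, for redress
        "human_reviewer": reviewer,      # None means fully automated
    }
    return json.dumps(record)

# A fully automated scheduling decision, traceable after the fact:
print(log_decision("w-17", "shift_reassigned", "scheduler-2.3",
                   {"hours_last_week": 41}, "weekly hours cap exceeded",
                   reviewer=None))
```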

Employers are starting to use programmes furnished by analytics companies to measure performance and make HR decisions. Not only is bias probable, but concerns also arise over the transparency of decisions derived from such systems, and over the privacy and protection of workers’ data. In its 2018 report, the AI Now Institute calls on “technology companies to go beyond the ‘pipeline model’ and commit to addressing the practices of exclusion and discrimination in their workplaces.” Software such as LaborWise offers productivity analytics that “help identify areas where labor costs are too high, impediments that slow people down, and departments that need additional staff” (source). If such programmes are used without workers in the know, that is bad enough. But if, as a next step, no provisions are set out to protect workers against the consequences and to provide them with possibilities of redress, information and consultation, the employer will sooner rather than later be perceived as “Big Brother”-like. Instead, workers should be in the know and have a say about the purpose and use of new hiring and assessment tools.

Alternatively, unfair treatment and constant stress over being monitored will only rise. Creativity will likely go down, and employee burn-out rates will go up. This goes against predictions that we need more creative, proactive employees. And, ultimately, it goes against new societal thinking around work-life balance and collaborative workplaces. Too much monitoring, and recruitment based on keywords, are thus not future-proof.

Consultation is key

Any technological transformation of the economy — including the deployment of AI — deserves a space for democratic dialogue and consultation between policy makers and all relevant stakeholders. Since AI systems will impact economic performance, public investment decisions and the labour market, involving social partners is essential. Up until this point, we have observed substantial differences in the way consultations take place. Trade unions are far less involved than business and employer organisations — if at all. Consultation also needs to take place in sectors and at firm level — workers are increasingly aware when they are left out of decisions to implement AI systems without prior notice (see the Marriott strikes as an example). A survey commissioned by the European Trade Union Confederation indicates: “Only 23% of workers reported that the introduction of new technologies that have the potential to monitor performance and behaviour or data protection issues had been addressed by information and consultation at company level so far”.

The 2019 OECD Employment Outlook provides several examples of company-level bargaining and the role of works councils in shaping the digital transformation: e.g. on the right to disconnect and the treatment of workers’ data (France, Spain) or working time (Germany). The 2019 OECD report on ‘Artificial Intelligence in Society’ stresses that:

“[It is] recommended for “stakeholders to work together on complementary AI systems and their co-creation in the workplace” (EESC, 2017). Workplaces also need flexibility, while safeguarding workers’ autonomy and job quality, including the sharing of profits. The recent collective agreement between the German sector union IG Metall and employers gives an economic case for variable working times. It shows that, depending on organisational and personal (care) needs in the new world of work, employers and unions can reach agreements without revising legal employment protections.”

Finally, there are AI applications that might affect freedom of association. There are programmes able to predict future industrial actions and strikes. An Austrian company advertises its labour relations software by — amongst other things — referring to its “Large scale data collection and geo-specific targeting […] to deliver accurate early warning of labour unrest well in advance”. Workplace democracy and social dialogue might be stifled if such methods become mainstream.

A checklist for AI @ Work

In the end, the discussion on “AI @ Work” is about ensuring job quality for workers through transparent and responsible steering. These organisational aspects of AI matter in at least four domains:

  1. The extent to which the monitoring of workers and the ownership, collection and repurposing of their data is carried out and regulated;
  2. The misuse and bias risks associated with algorithms (e.g. price setting that affects wages on online platforms, hiring and firing decisions; a minimal bias check is sketched after this list);
  3. The uncertainty over liability and security standards for automated and human-to-machine systems affecting workers’ health and safety (industrial robotics, semi-autonomous transport); and
  4. The impact of automation, algorithmic decision-making and digitally enhanced processes on organisational change.
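
On the second point, even a crude check can make bias risks concrete. The sketch below applies one widely used rule of thumb, the US “four-fifths” rule, to the outcomes of a hypothetical algorithmic hiring screen; the data and group labels are invented for illustration.

```python
def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs from an algorithmic screen."""
    totals, picked = {}, {}
    for group, selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        picked[group] = picked.get(group, 0) + int(selected)
    return {g: picked[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Flag groups whose selection rate is below 80% of the highest rate."""
    best = max(rates.values())
    return {group: rate / best >= 0.8 for group, rate in rates.items()}

# Invented example: a screen that selects group A far more often than group B.
data = ([("A", True)] * 8 + [("A", False)] * 2
        + [("B", True)] * 4 + [("B", False)] * 6)
rates = selection_rates(data)
print(rates)                     # {'A': 0.8, 'B': 0.4}
print(four_fifths_check(rates))  # {'A': True, 'B': False}: group B is flagged
```

Real audits are of course more involved, but even this level of scrutiny is impossible if workers and their representatives never see the outcome data in the first place.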

Some of this could be addressed in oversight bodies, some of it through multi-stakeholder dialogue that has substance. When it comes to workplace issues in particular, information and consultation mechanisms have to be strengthened, and new agreements likely made between businesses and worker organisations. To achieve fair outcomes for workers, their representatives have to be clear about:

‒ On what they are seeking information and consultation rights;

‒ What to include in new rounds of social dialogue and, potentially, collective bargaining.

A checklist for AI @ Work could then take shape along the following chart.
