Will Our Jobs Soon Be Replaced by AI?

Claudia Schulz
Thomson Reuters Labs
Aug 12, 2024

Insights from the Oxford Future of Professionals Roundtable

The Oxford Future of Professionals Roundtable

I recently had the privilege of being an invited panelist at the Oxford Future of Professionals (OxFOP) Roundtable at the University of Oxford.

The goal of the Roundtable, which brought together 70 invited participants from industry, academia, and regulatory organizations, was to engage in evidence-based discussions on the impact of AI on the future of professional work and its links to corporate strategy, entrepreneurial opportunities, and regulatory options, and to identify future areas of research. The day was organized around three panels: the impact of AI across occupations and industries, a deep dive into specific professions, and AI governance and regulation.

Throughout the day, it became clear that we are standing at the beginning of a new professional era, similar to the one ushered in by the computer or the internet. With AI expected to save professionals up to 12 hours a week and every professional predicted to have an AI assistant within the next five years, the future of work is being rewritten before our eyes. But what does this mean for job security, skill development, and the very nature of professional expertise?

Keep reading to learn what experts had to say about these questions! If you are curious about my own contribution to the Roundtable as a panelist of the “Deep Dive into Professions” session, have a look at this blog post.

The Future of Professionals Report

Steve Hasker, CEO of Thomson Reuters, opened the day with a keynote sharing key takeaways from the Future of Professionals Report published by Thomson Reuters:

  1. 77% of professionals expect AI to be transformative for their work.
  2. Professionals anticipate that AI will save them 4 hours a week over the next year and 12 hours a week in the next five years.
  3. Based on the survey results, Thomson Reuters predicts that within five years every professional will be using an AI assistant.

Thomson Reuters already leverages AI in its products, with CoCounsel as the prime example: an AI assistant that helps legal as well as tax and accounting professionals find and analyze information, and that can already save 50–70% of the time spent on some professional tasks.

Perhaps surprisingly, the survey found that only 10% of professionals name widespread job loss as their biggest fear, a topic that came up again throughout the day. Among fears associated with AI, job loss thus ranks only fourth; the three bigger fears are over-reliance on technology at the expense of skill development, AI being used for malicious purposes, and privacy concerns. Steve also highlighted that change management would be crucial, both to address fears associated with AI and to make the most of the new technology.

He closed his keynote with what I thought was an excellent point: we currently have the “worst language models we will ever work with”, meaning that further developments and improvements are to be expected on the generative AI front in general and in Thomson Reuters AI products in particular.

AI Impact Is Correlated with “White Collar” Jobs

Exposure to AI

Robert Seamans from NYU compared the transformational power of AI to that of computers or the internet. He introduced a study measuring the expected exposure of different professions to AI based on the skills each profession requires: salary, educational level, and the creativity of a profession are all found to be positively correlated with AI exposure. Based on these results, Robert concluded that “white-collar” jobs will have the highest exposure to AI.

Figure: Correlation between AI exposure and occupational characteristics (published here)
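To make “exposure” a little more concrete: studies of this kind typically assign each occupation an AI-exposure score and then correlate it with occupational characteristics. The toy Python sketch below illustrates what such a correlation analysis looks like; all numbers and column names are invented for illustration and are not the study’s actual data or code.

    # Toy illustration of an AI-exposure correlation analysis.
    # All numbers and column names are invented; they do NOT come from the study.
    import pandas as pd

    occupations = pd.DataFrame({
        "occupation":    ["lawyer", "accountant", "nurse", "welder", "journalist"],
        "ai_exposure":   [0.82, 0.78, 0.35, 0.20, 0.70],  # hypothetical exposure scores
        "median_salary": [95, 70, 55, 45, 50],            # hypothetical, in $1000s
        "education_yrs": [19, 17, 16, 12, 16],            # hypothetical years of education
        "creativity":    [0.6, 0.4, 0.5, 0.3, 0.9],       # hypothetical creativity rating
    })

    # Pearson correlation of each characteristic with the exposure score;
    # positive values mean the characteristic tends to rise with AI exposure.
    numeric = occupations.drop(columns="occupation")
    print(numeric.corr()["ai_exposure"].drop("ai_exposure"))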

This was echoed by Ekaterina Prytkova from the University of Sussex, who shared research showing that high AI exposure is correlated with occupations requiring a high to medium skill level. She highlighted that AI will change the relationship between humans and information, and stressed the resulting importance of human-friendly AI interfaces and training.

Impact of AI Exposure

Stijn Broecke from the OECD shared research showing that AI exposure is associated with employment growth rather than the often-discussed loss of jobs, confirming the findings of the Future of Professionals Report presented by Steve Hasker. Interestingly, the fear of losing one’s job to AI is low among professionals in manufacturing, since a lot of automation has already taken place in this sector. In contrast, the fear is much higher in finance, which has not seen much automation so far. Stijn also noted that workers with disabilities are expected to benefit greatly from AI, whereas older and more junior workers are generally at a higher risk of losing their jobs because of AI.

Carl-Benedikt Frey from the Oxford Internet Institute noted that the immediate risk of jobs or tasks being replaced by AI is low and that he instead expects an intermediate step of task simplification through AI. He also inspired a thought experiment: imagine that all tasks the ancient Greeks performed had suddenly been automated back then; what would the world look like now? In fact, many of their day-to-day tasks have since been automated, and new tasks and associated jobs have been created as a consequence. His point was that even if some of our current tasks, and maybe even jobs, get automated by AI, new tasks and jobs will arise. One area he sees as a prominent source of new tasks, and as less likely to be replaced, is social intelligence.

The Importance of Human Decisions

Emily Jefferis from KPMG warned the audience that humans still bear the responsibility for any decision they make and should not blindly trust and rely on AI. I reiterated this important point myself when asked whether, given the ease of using Generative AI, professionals now think they are AI experts, and how that affects the role of an AI scientist:

In the past, I often had to convince professionals that AI was valuable and could be trusted; now my role is almost flipped and I sometimes have to warn them to not blindly trust Generative AI.

I also advocated a development approach in which AI scientists and SMEs (subject matter experts) collaborate closely at every step of building an AI solution, and highlighted the advantages of a human-in-the-loop approach. To find out more, have a look at this blog post.
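To give a concrete feel for what a human-in-the-loop setup can look like, here is a minimal sketch; the prediction values, confidence threshold, and review function are all hypothetical placeholders and do not describe any particular Thomson Reuters system. Predictions that clear a confidence threshold are accepted automatically, while everything else is routed to an SME for review.

    # Minimal human-in-the-loop sketch: low-confidence predictions go to an SME.
    # The prediction values, threshold, and review step are hypothetical placeholders.
    from dataclasses import dataclass

    @dataclass
    class Prediction:
        answer: str
        confidence: float  # between 0 and 1

    def ask_sme(document: str, prediction: Prediction) -> str:
        # Placeholder: in a real workflow this would create a review task for an expert.
        print(f"SME review requested for {document!r} (model suggested: {prediction.answer})")
        return prediction.answer  # the expert may confirm or correct the suggestion

    def human_in_the_loop(document: str, prediction: Prediction, threshold: float = 0.9) -> str:
        # Accept confident predictions automatically; route the rest to an SME.
        if prediction.confidence >= threshold:
            return prediction.answer
        return ask_sme(document, prediction)

    # Example: one confident and one uncertain prediction.
    print(human_in_the_loop("contract_17.pdf", Prediction("termination clause found", 0.95)))
    print(human_in_the_loop("contract_18.pdf", Prediction("no termination clause found", 0.55)))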

Governance of AI Use

Yuni Wen from the University of Oxford emphasized the importance of studying public AI failure cases. Yuni categorized these failures into two types: functional failures, when AI systems fail to perform their intended function (e.g., an autonomous car causing a fatal accident), and ethical failures, when AI systems’ behavior conflicts with social norms (e.g., exhibiting bias towards certain ethnic groups). Yuni also introduced CapAI, an ethics-based auditing tool designed to help organizations respond to AI failures effectively.

Michael Impink from Paris University presented a survey of AI governance practices in startups, many of which lack the regulatory and governance teams that larger companies have. Based on over 1,500 responses, the survey found that startup-led governance can be categorized into traditional approaches (e.g., audit boards) and technical approaches (e.g., A/B testing and human-in-the-loop), and that 57% of startups use human-in-the-loop approaches for governance.

Ben Garfinkel discussed the risks posed by “Frontier AI”, i.e., highly capable, general-purpose AI systems like ChatGPT. On the one hand, Frontier AI can only be created by the small number of companies that have the necessary resources, which is a risk in itself. On the other hand, general-purpose models can be misused for harmful purposes and are subject to manipulation after release, for example for deepfakes and academic cheating. Ben also discussed the role of governments in AI governance: they can require or encourage companies to commit to safety frameworks and provide guidance on best practices.

Conclusion

As the official program drew to a close, many discussions continued, a testament to a day full of stimulating thoughts but also of open questions.

What struck me most was the palpable sense of both excitement and responsibility during each of the sessions as well as every single conversation I had. We stand at a pivotal moment in history, where our decisions and actions will shape not just our own careers, but the very nature of professional work for generations to come. The discussions reinforced my belief that the most powerful AI solutions will emerge not from siloed development, but from a true partnership between those who understand the technology and those who know the problems that need solving. It is clear to me that part of this collaboration needs to be a consideration of the larger-scale impact of the AI solutions we develop, as well as of their potential risks and governance.

Moving forward, I’m inspired to continue advocating for this collaborative approach, and I’m eager to see how the insights from this roundtable and the Future of Professionals Report will ripple out into the wider professional world.

I want to say a huge thank you to Prof. Mari Sako and her team for the tremendous organization of this engaging and insightful event, and I look forward to continued discussions and follow-up events.

The Oxford Future of Professionals (OxFOP) Roundtable took place on 26th June 2024 at the Saïd Business School, University of Oxford, and was sponsored by Thomson Reuters.

