Three Questions you need to ask before you use AI in your organisation

Audrey Lobo-Pulo
Phoensight
Nov 30, 2020 · 7 min read
Photo by Darlene Alderson from Pexels

The emergence of new generations of Artificial Intelligence (AI) algorithms, such as GPT-3, is creating urgency in addressing the socio-technological issues around AI. Along with all the hype come unprecedented challenges in how organisations can manage the repercussions and risks of deploying AI technologies — and who is accountable when things go wrong…

In fact, a recent survey of IT and Business Executives across nine countries (Australia, Canada, China, France, Germany, Japan, the Netherlands, the United Kingdom and the United States) indicates that 56 per cent of organisations are slowing down their adoption of AI technologies due to concerns around emerging risks and negative public perception.

While some are ‘finely tuning’ AI algorithms for ‘near-perfect performance’ in testing environments, the recent example of gender bias in Apple’s credit card algorithm suggests that deploying them in society can be like letting a bull loose in a china shop! This is not to be taken lightly when so much is at stake: AI presents broad and complex challenges that are unprecedented compared with previous technologies.

Organisations are caught between a rock and a hard place — do they play it safe and forgo the touted “economic growth and competitiveness”? Or do they take a calculated risk that could potentially damage their reputation and trustworthiness — not to mention any unforeseen collateral damage?

Photo by Roland Kay-Smith on Unsplash

The recent ‘State of AI 2020’ report notes an alarming growth in the computational, economic and environmental costs of achieving ‘incrementally smaller improvements in model performance’. At a time when organisations are committing to transitioning to a ‘net zero carbon emissions’ future, questions around governance — when and how to use AI — will make their way into the boardroom.

Meanwhile, various AI Ethics principles and guidelines are being developed with the intention of restraining the potential harms inflicted on society — but research indicates that these have little impact on human decision-making. Calls for auditing frameworks, greater algorithmic transparency and explainability, while incredibly important, don’t address the hidden social and ecological harms — most of these measures aim to flag or manage the manifestations of AI harm that can be detected, rather than those that are imperceptible.

In amongst all these complexities, board members, senior leaders and chief data officers are expected to foresee the potential risks and socio-technological vulnerabilities for better decision-making. With so much uncertainty, it’s no wonder that many are pressing governments for AI regulation so that some line may be drawn in the sand.

The task is daunting, but there are a few things we can do at the outset. First, we need to take a step back — away from fixing the plumbing in the AI pipeline — to survey the changing terrain and navigate our way forward by asking three questions:

1. How are our Perceptions changing?

Photo by Ravi Kant from Pexels

“Perception is crucial to understanding. How you see, and what you see, determine how you will be… ”

— John O’Donohue

The fundamental quest of big data and AI is the discovery of new insights into existing organisational challenges and the many issues we face today. Yet, how these issues and challenges are framed largely depends on how we perceive them — and this ‘framing’ extends to the data we collect and use, which in turn shapes the AI algorithms that are employed.

Our perception of the problems, and the information we use to better understand them, is crucial in setting our strategy (AI or otherwise) going forward. While AI algorithms can be incredibly useful in some instances, they suffer from an inability to adapt to multiple contexts in the way humans can. Why? Simply because the data they use is de-contextualised.

Now, this can be both an advantage and a disadvantage — in cases where the information is ‘objective’ (or has little variation with changing contexts e.g. normal human body temperature) AI and other computational processes are less likely to ‘go off the rails’, but these powerful tools can prove to be a huge liability when using ‘subjective’ information where context matters!

And we’ve seen examples of this in society — discriminatory job-screening algorithms, false criminal accusations, and inequitable credit ratings are just a few amongst many. But public debate around these issues still mainly focuses on data biases, and whether or not they can be mitigated by ‘curating’ the data or redesigning AI algorithms.

What’s often lacking is a deeper understanding of the significance of the many missing contexts surrounding the data — and a realisation that while some data can appear to be ‘objective’, systemic biases within society taint this information and have the potential to perpetuate or exacerbate the problem.

So we need to start by asking new questions.

What sort of data are we collecting? Whose perception does this information represent? What are the contexts that are missing? How does this change our perceptions? How do we interpret any data-driven insights across multiple contexts?

2. Where is the Knowledge?

Photo by Francesco Ungaro from Pexels

“Warm Data is not meant to replace or in any way diminish other data, but rather it is meant to keep data of certain sorts “warm” — with a nest of relations intact.”

— Nora Bateson

In a competitive market where the knowledge economy is fuelled by a race for information that allows organisations to become more productive and efficient, the use of AI algorithms is typically seen as a sign of technological progressiveness. As sophisticated AI algorithms, such as GPT-3, become more costly to train and run, the market dynamics between small businesses and larger companies are also expected to change.

While it may appear that the funding gap disadvantages the smaller players, what’s been overlooked is the type of knowledge and information that’s available. Organisations have long known that ‘tacit knowledge’, though difficult to quantify, can play an important role in understanding business and client needs. The challenge has been in finding ways to better harness and utilise these ‘human assets’.

Once again, a deeper comprehension of the role of ‘inter-relationality’ and the many contexts involved could unearth new knowledge with the potential to rival scalable AI technologies, by applying localised and nuanced insights for competitive advantage. Digital transformations that are careful in how they value these human assets, particularly in how information is contextualised, have the potential to ‘leap-frog’ blunter approaches with better outcomes.

Some questions to consider might be:

Where is the critical knowledge we need? What insights and learnings can be gained by examining the various inter-relationships within, across and connected to our organisation? How do these relate to more traditional knowledge sources?

3. How do we Respond?

Photo by cottonbro from Pexels

“The ethics of the system are the intrinsic aim of the system, not an externally imposed restraint on commercial or other outcomes.”

— Worimi man Deen Sanders & Kamillaroi man Rick Shaw

Strong organisational culture and values are critical for anchoring client trust during uncertain times. Alongside recent global issues such as climate change, increased inequality and political polarisation, we are witnessing a shift in public attitudes and awareness when it comes to corporate responsibility, accountability and sustainability.

Investors are being forced to diversify their investments away from systemic risks, by considering the environmental and social impacts as part of their fiduciary duty. In the same vein, companies are recognising that employee satisfaction is critical to good performance. In amongst all these shifts come calls for increased AI accountability and responsibility.

As we build for a new future, the social costs of digitisation, as they affect human rights, privacy and the vulnerable, bring to light complex challenges in how to navigate the ethical dilemmas that come with designing AI algorithms. Indeed, tech employees are increasingly taking on the additional burden and moral responsibility of tending to the ethical risks of technology, with little support.

At a time when organisational resilience has never been more important, it may be prudent to re-assess whether digital technologies are being used for greater efficiencies at the cost of employee and service vulnerability.

In his work on better understanding systemic resilience, French geophysicist Xavier Le Pichon puts forward the idea that tending to the vulnerabilities within a system is important for facilitating its evolution and for increasing its ability to adapt during times of uncertainty.

Could certain ‘perceived’ inefficiencies actually be advantageous during turbulent times? Could understanding the many contexts and nuances of the complexities we are working within allow us to find new ways of using the technology — without compromising our values or moral responsibilities? How can we ensure a robust corporate governance that supports a healthier workplace culture and ecosystem?

“Human culture is a decentralized evolutionary system… Any predictive model that fails to incorporate this distributed ongoing daily billion-headed microevolution is doomed to collapse…” — Kevin Kelly

This article is based on my position statement at the 5th IEEE UV2020 panel on “New legal, social, and ethical challenges posed by applications of AI”, and on my presentation, “In the Shallow with AI”, at the Big Data Conference Europe 2020.

We will be hosting a series of workshops on how organisations can navigate the complexities presented by AI in early 2021.
Register here for more information.

Phoensight is an international consultancy dedicated to supporting the interrelationships between people, public policy and technology, and is accredited by the International Bateson Institute to conduct Warm Data Labs.
