People-first AI: rewards, risks and the question of ethics

Magnetic
Published in Magnetic Notes
4 min read · Aug 22, 2023

Generative AI is the biggest technological step forward since the internet. It’s transforming the business landscape with new tools, products and ways of working, and it’s created a tidal wave of technological innovation and investment across sectors. For those of us invested in innovation, human progress and creating better futures, the possibilities are exciting.

We’re at a point of mass opportunity and adoption. Nearly all UK executives (97%) say the convergence of digital and physical worlds will transform their industry and 70% are actively exploring generative AI. More than half are upskilling their teams in it.

The business case is clear. But as we’ve scrambled to keep up with generative AI and capitalise on it, how much of that is because we’ve felt we had to? AI FOMO is real. Is it our new best friend or, aware of the issues with it, are we keeping the enemy close? A bit of both?

We’re seeing its value but also some big dark holes that we could fall down. Security. Privacy. Bias. Copyright. Accountability. Transparency. Ethics.

Rewards, risks and start by starting

Identifying the use cases for AI is not usually the problem. There’s no shortage, including automation, product design, coding, content generation, customer engagement, data analysis or process optimisation. With so many possibilities, the challenge is understanding which ones will help solve the biggest business challenges and where to invest in AI to help achieve goals.

Almost all business leaders believe generative AI will be essential to their strategy. But where to start?

Look at AI’s value potential in the context of the business and the industry. AI needs to be a true differentiator and enabler: from productivity to customer experience and business resilience — not just technology in search of a use case.

It’s not about creating a new AI strategy either; it’s about treating AI as another tool to solve existing problems. Leaders must then think about how to accelerate their business strategies to improve performance. There’s a sense that leaders are wary of repeating mistakes from previous digital transformations: wary of getting caught up in endless experimentation rather than execution.

The key is not to be paralysed by it. Focus on what you’re already doing and supercharge it so it goes to the next level, at speed. This could mean prioritising investment in technology, building talent teams with the skills to address today’s and tomorrow’s challenges, and having the foresight to continually re-evaluate strategy. Or it could mean making sure a business’s C-suite is tech-fluent and tech isn’t siloed with the CTO and tech teams.

While the potential of AI is vast, applying it without ethical constraints can lead to detrimental outcomes. Businesses need to be aware of those dark holes: operational, regulatory and competency risks, customer privacy, data handling, data quality, transparency, accountability and bias. When developing and implementing AI in a business, it is vital to first create a robust ethical framework around it.

How might we assess and mitigate such risks, so that growth is sustainable and we keep our customers’ and employees’ trust? Or, as we try to maximise the opportunities and efficiencies in AI, is risk inherent — and how much risk is acceptable?

Putting people at the heart of AI

“We must design for the way people behave, not for how we would wish them to behave.”

– Don Norman, Director of the Design Lab at the University of California

AI is here to serve us, not the other way around. It doesn’t always seem like that, but far more important (and harder) than understanding technology is understanding each other. We have to stay people-centred. AI can innovate, calculate, generate (and hallucinate!) but only people can connect with each other on a human, emotional level.

The moral and ethical questions are increasingly revealing themselves. Algorithmic bias and AI-driven decision making can create inequalities and have the potential to cause harm. Facial recognition AI can lead to breaches of privacy. How do we use it in a way that’s proactively ethical?

We know there needs to be an ethical framework around it (in the business context and wider), but to truly realise the benefits of AI, people need to be at its heart. And the voices need to be multidisciplinary, as Magnetic CEO Jenny Burns explains:

“To truly realise the benefits of AI, a multidisciplinary approach is essential. This involves tech developers, business leaders, ethicists, social scientists, and the general public collaborating to shape the AI narrative. When these diverse stakeholders come together, the AI solutions produced are more holistic, ethically sound, and aligned with societal needs.”

At Magnetic, we believe that putting people first (whether it’s your customers, partners, employees or the wider societal good) is the foundation of a sustainable, successful, purpose-led business. So how do we design and use AI with human connection, understanding and empathy at the heart of it?

Useful reading:

The 14 people who matter in UK AI policy

The Pentagon just launched a generative AI task force

The world needs a new Turing Test

AI is making companies even more thirsty for your data

AI safety summit; £13m to boost use of AI in healthcare

Join the conversation

Join our Exchange discussion on 13th September from 8.30–9.30am, where we bring together business leaders from our network to explore these topics. As we all walk down this new path of AI together, it’s a chance to share thoughts, learnings and questions with each other.

Magnetic is a design and innovation company that helps design better futures. We’ve worked with global businesses to build capabilities, products, services and transform organisations. To find out more, get in touch: hello@wearemagnetic.com.
