Author Elle Farrell-Kingsley On The Future Of Artificial Intelligence

David Leichner, CMO at Cybellum
Published in Authority Magazine
Jan 15, 2025

AI creates possibilities, some that we can barely imagine today! It’s what I imagine the dawn of the internet to be like all over again, offering the potential to solve complex global challenges — from eradicating diseases to addressing climate change — through innovative applications of data and intelligence.

As part of our series about the future of Artificial Intelligence, I had the pleasure of interviewing Elle Farrell-Kingsley.

Elle Farrell-Kingsley is an award-winning futurist, AI ethicist, and bestselling sci-fi author recognised among the 100 Brilliant Women in AI Ethics™ and TechWomen100 honourees. She has advised on transformative strategies in AI ethics, legal tech, and global policy for organisations including the UK Parliament, the Commonwealth, and the European Commission; worked directly on generative and conversational AI in Big Tech; and delivered educational workshops on AI, privacy, and data.

Thank you so much for joining us in this interview series! Can you share with us the “backstory” of how you decided to pursue this career path in AI?

I began my career as a journalist, inspired by stories like Carole Cadwalladr’s groundbreaking work on Cambridge Analytica, which revealed the impact technology can have on our lives, culture, and politics. Drawn to technology stories and living a travel-heavy life, flying out to see various innovations and R&D developments, I specialised in tech reporting, serving as a bridge to make complex ideas accessible while amplifying expert insights. I became passionate about helping everyone understand how emerging technologies shape our world.

This recognition for my tech analysis, along with a growing interest in cyberlaw, led me to Big Tech, where I worked as a dialogue writer training large language models (LLMs) and generative and conversational AI. LLMs are advanced AI systems trained on vast datasets to understand and generate human-like text; generative AI creates new content such as text or images; and conversational AI focuses on simulating human-like interactions through dialogue. My focus was on humanising AI — creating empathetic and relatable responses. This role provided unparalleled insights into AI’s technical intricacies and societal implications.

Simultaneously, my work has led me to policy and advisory roles, contributing to international initiatives and sharing my expertise at conferences and roundtables. Participating in the SAPEA Strategic Foresight Workshop on the Future Uptake of AI in Europe, I contributed to discussions that provided governance insights and options for the EU to consider in the AI Act.

I aim to inspire critical thinking about technology’s role in shaping equitable futures by presenting and engaging with diverse audiences. Similarly, I also channel these experiences into speculative fiction, exploring humanity’s relationship with technology. My sci-fi, often blending dystopian/utopian themes, challenges readers to imagine potential futures and reflect on the challenges and opportunities AI presents.

Today, I integrate my interdisciplinary expertise in journalism, technology, policy, and storytelling to take a strategic, future-focused approach to AI. My mission is to envision and help shape a world where AI benefits everyone, ensuring its complexities are understood, its potential is harnessed responsibly, and its societal impact is equitable.

What lessons can others learn from your story?

One lesson from my journey is the importance of carving out your own path. Growing up, I struggled with the question, “What do you want to be when you grow up?” because I never felt there was a singular answer. Coming from an arts background and not directly from STEM, I had to navigate a unique journey, creating roles that didn’t yet exist. By following my passions — whether in journalism, AI, policy, or storytelling — I’ve shaped a career that’s uniquely my own. It’s a reminder that you often create the most exciting opportunities for yourself (and to wonder how many incredible roles are still to come!)

Success often comes from staying true to your vision and embracing the unique journey that defines your expertise. For me, this meant venturing into uncharted territories, from tech journalism to AI policy, to craft a career that reflects my passion and impact.

Can you tell our readers about the most interesting projects you are working on now?

I’m currently working on several interesting projects. One of them is a retro arcade game inspired by my bestselling story, The Last Garden. This project not only lets me explore storytelling in a new, interactive medium but also allows me to delve deeper into the narrative world I’ve created. The interactive medium allows players to explore moral dilemmas tied to the advancement of tech, offering an immersive way to understand complex concepts.

I’m also writing a full-length novel that imagines a world shaped by today’s emerging technologies, blending science fiction with speculative fiction, a genre that explores imaginative scenarios grounded in possible realities and often addresses societal questions through futuristic lenses. The beauty of this genre is taking all of our current issues, policies and developments and exploring “what if?”.

In addition to my creative work, I’m deeply involved in research on AI, focusing on safety and ethical concerns related to data collection. I’ve collaborated with educators and industry experts to develop AI safety and responsible AI workshops, creating engaging content that resonates with young learners and trains them to understand and navigate these critical issues. These workshops aim to empower the next generation to engage with AI responsibly and ethically. Similarly, I’ll be working with ULTRA (University of Law’s Technology Research Academy) on some exciting developments in the new year.

I’ve also been delving into academic research on cross-border data ethics and the role of speculative fiction in policy foresight. For instance, my recent work explores how storytelling can model the societal impacts of AI, serving as a bridge between technical concepts and human-centred policymaking. This intersection of academia and creative practice highlights the importance of interdisciplinary approaches in shaping ethical technology.

Most of all, I’m excited about the open possibilities my freelancing career offers in the tech industry. The fast pace means you never quite know what opportunities are around the corner, and I find that incredibly stimulating. This constant change and innovation keep my work fresh and interesting, allowing me to continue exploring new frontiers in AI, storytelling, and beyond.

None of us are able to achieve success without some help along the way. Is there a particular person who you are grateful towards who helped get you to where you are? Can you share a story about that?

Just one? No, definitely not! There are so many people I am forever grateful to for their support and guidance. My undergraduate module leader, Simon Philo, was the first real educator to encourage my incessant “why’s,” challenging me to think deeply, question things, and embrace my out-of-the-box thinking — traits that have shaped who I am and the work I do today.

Similarly, Cat Zuzarte Tully and Finn Strivens at the School of International Futures were instrumental in nurturing my strategic foresight approaches. They provided the space to explore big ideas and encouraged me to think long-term about complex issues. Then there’s the team at Voice Magazine — Tom, Emrys, and Diana — who gave me my first big break in journalism. They saw something in my writing and gave me the platform to share my work with the world. Lastly, there’s also Youth For Privacy, who have helped my workshops reach participants all across the world.

There are also those in my personal life who have always believed in me, supported me through the ups and downs of being an artist and freelancer, and encouraged my storytelling journey.

Lastly, my readers — my loyal supporters — continue to read my work and give it life, leading to my work reaching #2 in new releases and #30 on the Amazon Best Seller list. Their encouragement and engagement inspire me and drive me to continue creating meaningful stories.

What are the 5 things that most excite you about the AI industry? Why?

1. Future opportunities

AI creates possibilities, some that we can barely imagine today! It’s what I imagine the dawn of the internet to be like all over again, offering the potential to solve complex global challenges — from eradicating diseases to addressing climate change — through innovative applications of data and intelligence.

2. A second Enlightenment period

At the risk of sounding overly optimistic, we might be entering a “Second Enlightenment,” where AI democratises access to knowledge and creativity, empowering humanity to focus on innovation, higher-order thinking, and cultural evolution. I believe this will redefine how we live, learn, and collaborate globally.

3. Transformative tech

Integrating AI with emerging fields like quantum computing promises breakthroughs in computation, enabling advancements in materials science, medicine, and even space exploration at a previously unimaginable pace. Space exploration is something I hope to see in my lifetime.

4. Future jobs

While automation is a concern, it’s also inspiring entirely new industries and career paths. Perhaps your future job doesn’t even exist yet (and imagine how creative those opportunities may be!). However, this also comes with risks, which I’ll discuss in a moment!

5. Cultural diplomacy

AI’s ability to break language barriers, preserve endangered languages, and create new forms of expression can strengthen cultural exchange and diplomacy. I would love to see it used to enhance understanding and collaboration across borders!

What are the 5 things that concern you about the AI industry? Why?

1. Deepfakes and synthetic media

AI-generated content like deepfakes and synthetic media threatens our ability to trust what we see and hear. What’s more, I suspect that the constant questioning of reality we’re now advised to practise — “Is this real?” — will lead to paranoia within society. This erosion of truth could destabilise societies, further fuel misinformation, and undermine public discourse, making it harder to discern reality from fabrication.

2. Hyperreality and loss of authenticity

At a time when there’s a significant disconnect between online and real life, the growing presence of AI-generated content risks creating a hyperreal society where AI-generated simulations feel more “real” than reality itself. We’ve seen this with AI-generated text, videos (deepfakes), and even beauty filters. This could distort cultural narratives and blur the line between truth and fiction, while also weakening human creativity and authenticity in the process.

3. Lowered data privacy and exploitation

The vast data requirements of AI systems raise significant privacy concerns. Data is often collected without clear consent or transparency, pushing the boundaries of what individuals are willing — or forced — to share. We saw this with Cambridge Analytica and how much data can be harvested from seemingly harmless quizzes or social media.

What’s more, this erosion of privacy can lead to surveillance, exploitation, and reduced personal autonomy.

It’s particularly concerning as people increasingly share personal views, thoughts, and beliefs with AI chatbots. If we’re already wary of how much search engines know about us, it’s all the more important to consider the far greater data collection happening when users turn to AI for job hunting, resume assistance, relationship advice, health inquiries, and more.

Many users treat conversational AI as a private, encrypted and secure space, but in reality, it is being used to build detailed profiles of all of its users.

4. Job displacement and digital divide

Automation fuelled by AI threatens to displace millions of jobs, particularly in industries reliant on repetitive tasks. Without serious reskilling efforts or policy intervention, this could widen economic disparities and concentrate power in the hands of those who control AI technologies.

There’s also the risk of widening the already existing digital divide. Ultimately, it’s imperative that we, as global citizens, ensure that AI benefits are accessible to all, regardless of geographic or socioeconomic status. This will be critical to fostering global equity and preventing further systemic exclusion.

5. Predictive algorithms

There’s also the growing reliance on predictive algorithms. While they can offer valuable insights, such as shopping recommendations or suggested television shows, these algorithms often reinforce existing biases. We’re now seeing these algorithms used to make decisions that impact hiring, healthcare, law enforcement, and more. For instance, biased algorithms in recruitment systems have disproportionately screened out qualified female candidates in STEM fields, highlighting the need for rigorous oversight and diverse training data.

Furthermore, a 2016 ProPublica investigation found that the COMPAS algorithm, used in the US criminal justice system, was more likely to falsely label Black defendants as higher risk compared to white defendants.
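The kind of disparity ProPublica measured can be made concrete with a toy audit. The sketch below uses entirely synthetic, illustrative records (not the real COMPAS data) to show the basic calculation: compare false positive rates (the share of people who did not reoffend but were still labelled high risk) across two groups.

```python
# Toy fairness audit: group-wise false positive rates.
# All records are synthetic and purely illustrative.
# Each record: (group, labelled_high_risk, actually_reoffended)
records = [
    ("A", True,  False), ("A", True,  False), ("A", True,  True),
    ("A", False, False), ("A", False, True),
    ("B", True,  False), ("B", False, False), ("B", False, False),
    ("B", False, True),  ("B", True,  True),
]

def false_positive_rate(group):
    """Share of non-reoffenders in `group` wrongly labelled high risk."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    false_pos = [r for r in negatives if r[1]]
    return len(false_pos) / len(negatives)

for g in ("A", "B"):
    print(g, round(false_positive_rate(g), 2))  # A: 0.67, B: 0.33
```

On this toy data, group A’s false positive rate is double group B’s even though both groups have the same number of non-reoffenders: the same style of comparison, run on real predictions and outcomes, is what surfaces the bias described above.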

Similarly, a 2021 study by the World Economic Forum revealed that AI models used in healthcare risk assessments exhibited racial biases, potentially leading to unequal treatment.

If we continue to rely on AI predictive algorithms to make decisions on our behalf and determine our lifestyle, then we could find ourselves in a strange dystopian future. This is an area that I’m also researching.

As you know, there is an ongoing debate between prominent scientists (personified as a debate between Elon Musk and Mark Zuckerberg) about whether advanced AI poses an existential danger to humanity. What is your position about this?

I think this debate is fascinating because it reflects deeper concerns about how we as a society approach the development and deployment of such technologies. I’m rooted in a future foresight approach, an interdisciplinary methodology exploring possible, probable, and preferable futures to help inform present-day decision-making. Future foresight emphasises proactively identifying potential scenarios and their implications, ensuring we address risks while maximising opportunities. From this perspective, I view AI as fundamentally neutral. AI is not inherently dangerous but rather a reflection of the data we feed it, the systems we design, and the objectives we set.

As such, the real risk lies in our lack of forethought and ethical consideration when training and deploying AI. If we train these systems with harmful biases, prioritise profit over safety, or fail to anticipate unintended consequences, we could indeed create systems that threaten humanity.

From this perspective, the dangers of AI lie not in the technology itself but in how we design, train, and govern it. John Stuart Mill’s harm principle is particularly relevant here: the actions of individuals (or, by extension, systems) should only be limited to prevent harm to others. When we develop AI, we must ingrain this principle deeply within its design. However, even when safeguards like these are in place, human decisions about implementing or interpreting them can lead to unintended consequences.

Indeed, Isaac Asimov’s I, Robot is a powerful example. In the story, robots are programmed with the Three Laws of Robotics to prevent harm to humans. Yet, those laws — intended to protect — become the source of conflict as the robots interpret them in ways humans fail to anticipate. Although I, Robot is fiction, I believe speculative fiction is an incredibly valuable tool for exploring potential futures and practising responses to complex scenarios. It allows us to imagine what could go wrong and reflect on how to prevent those outcomes.

Tackling these challenges requires collaboration across disciplines — engineers, policymakers, ethicists, and social scientists must work together to address AI’s ongoing risks. For instance, I’ve worked alongside policymakers and technologists to integrate ethical foresight into legislative frameworks. The real challenges of AI lie not just in the technology but in the governance frameworks we create. This is why my work focuses on embedding ethical principles at every stage of AI’s lifecycle, from design and training through to deployment.

What can be done to prevent such concerns from materializing? And what can be done to assure the public that there is nothing to be concerned about?

First and foremost, ethical AI development must be at the core of all efforts. This involves embedding human values into AI systems, ensuring they reflect diversity and inclusivity, and prioritising societal well-being over profit. Careful consideration of training data and system design can help mitigate biases and align AI technologies with shared human principles.

Secondly, public policy and regulation will play a critical part. Governments and organisations must enact comprehensive and enforceable laws to ensure accountability, safety, and ethical usage. For example, the European Union’s AI Act — the world’s first overarching AI legislation — proactively addresses AI risks and sets a global standard. Similarly, we’re now also seeing South Korea’s AI Basic Act, which combines multiple proposals into a cohesive framework emphasising risk-based regulation. These examples showcase how powerful legislation can foster innovation while safeguarding public interests, offering a blueprint towards global cooperation.

Thirdly, transparency and education are equally important in building public trust. With predictions that AI will generate 95% of internet content by 2026, I think this highlights the urgent need to equip the public with the knowledge to understand and critically engage with AI systems. It can be increasingly difficult for consumers to understand what’s AI-generated and what isn’t, especially in the realm of synthetic media and deepfakes. Consequently, open communication about how AI operates and makes decisions will significantly help demystify the technology and alleviate fears.

Lastly, speculative fiction offers a powerful lens through which we can all explore the future of AI. Presenting imaginative scenarios allows society to grapple with and analyse potential risks and opportunities, encouraging foresight and innovation. Through storytelling, we can anticipate challenges, inspire dialogue, and design systems that avoid pitfalls.

As you know, there are not that many women in your industry. Can you advise what is needed to engage more women into the AI industry?

To bring more women into AI, we need accessible education, mentorship, and visibility. Representation inspires participation; when women see others succeeding in AI, they can envision a place for themselves. Increasingly, this extends beyond technical roles to interdisciplinary contributions from fields like ethics, psychology, law, and the arts, all of which are essential to the future of AI. AI development is no longer just about coding roles.

AI reflects the society that creates it. By ensuring women are represented across all sectors and disciplines, we can build a future where AI serves everyone equitably, responsibly, and innovatively. Ultimately, this isn’t just about inclusion but about better outcomes for all of society.

What is your favorite “Life Lesson Quote”? Can you share a story of how that had relevance to your own life?

“Fortune Favours the Bold”

This idea has pushed me forward, guiding me to take bold steps toward my goals, even when the path was uncertain. For example, I moved to China for a year, which was one of my major leaps, leaving behind everything familiar to embrace a new culture and opportunities. Arriving in Beijing, I was both exhilarated and overwhelmed. Navigating language barriers and cultural nuances taught me resilience and adaptability, lessons that continue to shape my approach to challenges in AI policy today. It was a risk, but it transformed my perspective and taught me to trust my instincts.

Similarly, I also left a perfectly good job in tech journalism to dive into big tech, knowing it was the best way to explore cutting-edge AI and emerging technologies. That decision gave me firsthand experience shaping the tools and systems I’d long written about. It also gave me the confidence to pursue my deeply personal dream of writing my book and then putting it out there in the world. Sharing my writing with the world required its own kind of boldness, but it was a step I knew I had to take to realise my creative ambitions.

Each choice taught me that success often lies on the other side of uncertainty.

How have you used your success to bring goodness to the world? Can you share a story?

Through my involvement in the School of International Futures’ (SOIF) National Strategy for the Next Generations (NSxNG) programme, I have actively contributed to shaping policies that consider the long-term impacts of emerging technologies on future generations. This initiative emphasises the importance of intergenerational dialogue in policy-making, ensuring that the voices of young people are integral to strategic decisions.

In this capacity, I hosted roundtables and put forward the views of intergenerational participants, providing insights into the ethical considerations of AI and emerging technologies and advocating for policies that promote inclusivity and sustainability. My contributions have been recognised in various forums, including Commonwealth Parliamentary discussions, the European Commission, and the UK Parliament, where I have shared perspectives on improving government strategic thinking for young people, resulting in published evidence in Parliament advocating for a committee of the future. I hope this inspires others that you can have a seat at the table too; you just need to use your voice!

You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. :-)

If I could start a movement to bring the most good to the greatest number of people, it would focus on global digital inclusivity and ethical technology development. This initiative would ensure that everyone, regardless of background or location, has access to the tools and knowledge to engage with emerging technologies like AI, cybersecurity, and data privacy.

At its core, the movement would work to bridge the digital divide by providing universal internet access, ensuring no country or community is left behind in the digital age. It would also emphasise digital literacy, integrating education about responsible technology use into curricula worldwide, so people understand how to use technology and its ethical and societal implications.

If I think even further about this, perhaps this movement could advocate for responsible innovation, where technologists, businesses, and governments are held accountable for creating technologies prioritising fairness, privacy, and human dignity.

The result of this could be a global digital Bill of Rights that truly protects users’ data while mitigating the digital divide, providing everybody with equal digital rights online. It wouldn’t be something as contentious as the UK’s Online Safety Act but would instead look to protect digital users’ rights with a user-first focus. This vision could build on frameworks like the EU AI Act and the collaborative efforts of policymakers, technologists, and ethicists I’ve worked alongside.

By empowering individuals with knowledge and access and holding creators and tech companies accountable, this movement would ensure technology acts in a way that uplifts everyone. Who knows, perhaps this will have to be my next project — fortune favours the bold, after all!

As I look ahead, I’m focused on expanding the ethical frameworks guiding AI development. Whether through academic research, speculative fiction, or advisory roles, my goal is to ensure emerging technologies align with values of equity, inclusivity, and human dignity.

How can our readers further follow your work online?

I’m here, there, and everywhere!

This was very inspiring. Thank you so much for joining us!

About The Interviewer: David Leichner is a veteran of the Israeli high-tech industry with significant experience in the areas of cyber and security, enterprise software and communications. At Cybellum, a leading provider of Product Security Lifecycle Management, David is responsible for creating and executing the marketing strategy and managing the global marketing team that forms the foundation for Cybellum’s product and market penetration. Prior to Cybellum, David was CMO at SQream and VP Sales and Marketing at endpoint protection vendor, Cynet. David is the Chairman of the Friends of Israel and Member of the Board of Trustees of the Jerusalem Technology College. He holds a BA in Information Systems Management and an MBA in International Business from the City University of New York.
