The AI Minefield: Who Owns Your Fake Face and Why Your News Feed Lies to You

Rob Young · Published in ILLUMINATION · 7 min read · Sep 23, 2023

A reality check on the high-stakes poker game of AI ethics — are you all in or folding?

Photo by Ian Schneider on Unsplash

Ah, yes, artificial intelligence — the Pandora’s box of the 21st century.

On one hand, it promises to revolutionize everything from art to healthcare to transportation.

On the other hand, it’s creating an ethical and legal minefield that would make even a philosopher cringe.

So, what are the most glaring ethical and legal issues plaguing the AI world today? I’m about to take you on a rollercoaster ride through intellectual property ownership quandaries, deepfake debacles, and data bias nightmares.

You might want to keep your hands inside the vehicle at all times; it’s going to be a bumpy ride.

Who Really Owns that AI-Generated Masterpiece? Picasso’s Rolling in His Grave

I like stories, so let’s start with one of those.

Imagine Jane, a digital artist and aspiring entrepreneur who is keen to monetize her unique AI-generated artwork. She uses an AI program that takes her rough sketches and transforms them into something that can only be described as a blend of Salvador Dalí and Picasso, with a dash of surrealism for good measure. But when she tries to copyright her works, she hits a legal brick wall.

“You mean to say, the AI could technically own a part of my work?” she asks her lawyer, dumbfounded.

Welcome to the new era of art, where your co-artist is a machine, and the legal lines of ownership are blurrier than a smudged canvas.

Why You Might Not Own Your Own Art

The World Intellectual Property Organization (WIPO) hasn’t explicitly laid out regulations for AI-generated works, but it’s pretty clear that our existing intellectual property (IP) laws are trailing behind the technology.

You’d think with how advanced we’ve become, we’d have an answer to that seemingly straightforward question:

Who owns the output if the input is collaborative between man and machine?

In 2019, a fascinating case emerged where an AI called AIVA was credited with composing music for a video game. This blurred the lines even further.

Could the AI be considered a ‘composer,’ or is it just a tool?

According to existing copyright law, the notion of ‘authorship’ usually implies some level of creativity and intent — qualities we don’t typically attribute to machines. But as AI becomes increasingly sophisticated, this traditional definition is being challenged.

The Shared Ownership Model — Why It Could Work

The Shared Ownership Model isn’t just a wild idea; it’s a necessity we should seriously consider.

In Jane’s case, let’s imagine a framework where she shares ownership rights with the developers of the AI program she uses. The AI developers get a small cut from sales, and Jane retains the majority share because, well, the AI wouldn’t even know what ‘ownership’ is without her human intent driving the creative process.

This collaborative ownership could even extend to include third-party platforms that host or sell the art. In doing so, we create a multi-tiered system that recognizes the contribution of each party — Jane, the AI developers, and the hosting platform — without diluting the essence of individual creativity.

The key benefit? It would also set a precedent for future works and offer an initial framework for lawmakers to consider as they inevitably play catch-up with technological advancements.
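To make the multi-tiered idea concrete, here is a minimal sketch of how such a royalty split might be computed. The shares and party names are my own illustrative assumptions, not a proposed legal standard:

```python
# Hypothetical multi-tiered royalty split under a Shared Ownership Model.
# The percentages are illustrative only: the artist keeps the majority,
# the AI developers get a small cut, and the hosting platform takes a fee.
SHARES = {"artist": 0.70, "ai_developer": 0.20, "platform": 0.10}

def split_royalties(sale_price: float) -> dict:
    """Divide a sale among the parties according to their agreed shares."""
    return {party: round(sale_price * share, 2) for party, share in SHARES.items()}

# A $100 sale of Jane's artwork under this (assumed) split:
print(split_royalties(100.0))  # {'artist': 70.0, 'ai_developer': 20.0, 'platform': 10.0}
```

The point isn’t the exact percentages; it’s that a transparent, machine-enforceable split is trivially easy to implement once the law defines who the parties are.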

The Deepfake Dilemma: Yes, That Video of You Eating a Bat is Convincing

Meet Alex, our next protagonist. He’s a software engineer who’s always been cautious about his digital footprint.

He uses VPNs, ad-blockers, the whole nine yards.

But one fine day, he discovers a viral video featuring “him” singing opera — poorly, I might add — while riding a skateboard and periodically yelling slurs.

The video is a deepfake, and Alex suddenly becomes an overnight internet sensation for all the wrong reasons. If a tech-savvy individual like Alex can fall victim, what chance do the rest of us have?

Waking Up to a Bad Reality: Deepfakes Are No Joke

It’s easy to dismiss deepfakes as the latest toy for internet pranksters. But they’re far from harmless.

A study by the Center for a New American Security underscored the potential risks of deepfakes in misinformation campaigns. It’s not just about public shaming; it’s about the distortion of truth on a monumental scale.

And let’s not kid ourselves; current authentication and verification methods are as effective as a sieve holding water.

Fighting Fire with Fire: Mandatory Watermarking & AI Detection

In the war against deepfakes, we might have to use the very technology that creates them.

Consider a universal watermarking system that’s as standardized as, say, the “Organic” label on your kale.

This watermark could interact with AI algorithms trained to spot and flag deepfake content. It’s like having a smoke alarm that not only detects fire but also throws a bucket of water on it.

But let’s not stop there. Platforms hosting videos should be legally obligated to employ deepfake-detection algorithms.

Imagine YouTube or Facebook having a deepfake alert similar to their copyright infringement detection systems. It wouldn’t just save the Alexes of the world from unwarranted embarrassment; it could prevent the spread of malicious falsehoods that could potentially disrupt elections or ruin lives.
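As a rough sketch of how a provenance watermark could work, consider a signature bound to the exact content bytes at creation time, which platforms verify before serving the video. Everything here (the key, the function names) is a hypothetical simplification; real systems would use public-key infrastructure rather than a shared secret:

```python
import hashlib
import hmac

# Hypothetical sketch: a camera or editing tool signs content at creation
# time; the platform verifies the tag before serving the video.
SECRET_KEY = b"issuer-signing-key"  # in practice, a real key managed via PKI

def watermark(content: bytes) -> str:
    """Produce a provenance tag bound to the exact content bytes."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Any tampering with the content (e.g. a deepfake face swap) invalidates the tag."""
    return hmac.compare_digest(watermark(content), tag)

original = b"frame data of the real video"
tag = watermark(original)

assert verify(original, tag)                     # authentic content passes
assert not verify(b"deepfaked frame data", tag)  # altered content fails
```

A scheme like this doesn’t detect deepfakes directly; it flips the burden of proof so that unsigned or tampered content is the thing that gets flagged.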

While this isn’t a foolproof solution, it’s a start. And sometimes, that’s all we need to turn the tide in a seemingly insurmountable battle.

AI’s Dirty Little Secret: Your Data is Biased and You Know It

Meet Emily. Emily is a top-tier data scientist who takes immense pride in her work. She’s been developing an AI algorithm designed to write news articles.

But as soon as the prototype is launched, it starts churning out biased articles leaning heavily towards a particular political ideology.

Emily is aghast. She followed every rule in the book, so what went wrong? Upon further inspection, she realizes that the problem isn’t her code; it’s the biased data sets she’s been feeding the algorithm.

Welcome to the shadowy underbelly of AI — where the data can, and probably will, betray you.

Your Data is Flawed, and Here’s Why That’s Dangerous

As algorithms become more advanced, the notion that “AI is only as good as the data it’s trained on” has never been truer.

A study from MIT revealed that facial recognition algorithms displayed both gender and racial bias, misidentifying women and darker-skinned individuals at far higher rates than white men.

The issue is endemic, and the implications are massive. It’s not just about social equity; it’s about the integrity of data-driven decisions that affect us all, from job applications to criminal sentencing.
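The kind of disparity that study measured is easy to check for yourself once predictions are tagged by group. Here is a minimal sketch using invented, illustrative data (not the study’s actual numbers):

```python
from collections import defaultdict

# Hypothetical model outputs: (demographic group, was the prediction correct?).
# The data below is invented purely to illustrate the computation.
results = [
    ("lighter", True), ("lighter", True), ("lighter", True), ("lighter", False),
    ("darker", True), ("darker", False), ("darker", False), ("darker", False),
]

def error_rates(results):
    """Compute the misidentification rate per demographic group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# A large gap between groups is exactly the red flag the study pointed to.
print(error_rates(results))  # {'lighter': 0.25, 'darker': 0.75}
```

If your model’s error rate triples when the group changes, the problem is almost certainly in your training data, not your users.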

Going Beyond Token Measures: Data Audits & Ethical AI Councils

The issue of biased data isn’t something that can be patched up with a quick fix — it demands a structural overhaul.

Emily, in our hypothetical story, takes the radical step of instituting an ‘Ethical AI Council’ within her organization. Composed of individuals from diverse backgrounds and fields, this council audits data sets for bias and makes recommendations on data sourcing and usage.

But let’s take it a step further. What if this wasn’t just an organizational initiative but a federally mandated requirement?

Every AI project, whether public or private, could be required to submit a ‘Bias Impact Assessment’ alongside their algorithms. Failure to meet a certain ethical standard would mean no green light for the project until biases are adequately addressed.
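What might such a gate look like in practice? One simple (assumed) form: compare the training data’s demographic representation against a reference population and refuse the green light when any group’s share deviates beyond a tolerance. The group names and threshold below are hypothetical:

```python
# Hypothetical 'Bias Impact Assessment' gate. The reference shares and
# tolerance are assumptions for illustration, not a regulatory standard.
REFERENCE = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}
TOLERANCE = 0.10  # maximum allowed absolute deviation per group

def bias_impact_assessment(sample_counts: dict) -> bool:
    """Return True (green light) only if every group's share of the
    training data stays within TOLERANCE of its reference share."""
    total = sum(sample_counts.values())
    for group, expected in REFERENCE.items():
        share = sample_counts.get(group, 0) / total
        if abs(share - expected) > TOLERANCE:
            return False  # no green light until the data set is rebalanced
    return True

assert bias_impact_assessment({"group_a": 520, "group_b": 290, "group_c": 190})
assert not bias_impact_assessment({"group_a": 900, "group_b": 80, "group_c": 20})
```

Representation is only one axis of bias, of course, but even a crude automated check like this would catch the lopsided data set that blindsided Emily.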

And this goes both ways. I believe it is absolutely necessary to keep data-driven AI models as unbiased as possible, and frankly, that would be a huge benefit to societies around the globe. Your news feed today is most likely lying to you, not out of malice, but to quench its preprogrammed, unquenchable thirst for your attention and engagement.

I don’t know about you, but I want my future AI serving me up stuff that is true, not stuff that infuriates or depresses me into clicking on an image that is also most likely fake.

Wrapping Up the Ethical Rollercoaster: Where Do We Go from Here?

If you’ve made it this far, I applaud you. The unsettling ethical dilemmas and legality issues surrounding AI aren’t fun, and frankly, they stress me the F*** out.

But then I think about the other side of the coin. The side where we use the unimaginable intelligence that only AI can provide to solve these challenges. And in doing so, we create a world of abundance, equity, happiness, and health. Despite my sometimes sardonic writing, I’m massively bullish on an AI-enabled future. I’ve spent the last year writing hundreds of pieces of educational content to help people learn and understand how to use AI to change their lives.

I’ve come to the realization that knowing is just the first step. The key takeaway here is the need for proactive measures, be it redefining ownership models, cracking down on deepfakes, or cleansing our data sets of deeply ingrained biases of all types.

The future of AI is, quite literally, in our hands.

The question is, are we brave enough to steer it in the right direction, or will we let it plunge into the abyss of ethical ambiguity?

The choice is yours — choose wisely.
