AI Risks: Strategic Shifts Businesses are Making

Fraud prevention amongst the top priorities for industry leaders

Kumar Sanjog
Yugen.ai Technology Blog
6 min read · Apr 26, 2024


The world was still preparing to combat conventional fraud methods when advancements in AI handed bad actors another potent tool. Although agents, bots, and AI-generated content are built for good, they also exemplify how technology can be misused for malicious purposes.

Adoption of GenAI is growing at a pace never seen for any previous technology. And with every new technology come adverse implications as well. Several leaders have already realized how big a threat this is to user experience and ROI, and have started taking steps to combat it before it's too late.

Projection of Market Penetration of AI Generated Content

In this article, I shed light on how the risks of AI are driving big strategic shifts for businesses across sectors, using a few examples from the content, social media, and advertising spaces.

An interesting insight here is that across all these sectors, genuine users will also feel the heat of the new strategies being adopted to prevent fraud.

What’s happening at Medium with AI generated content?

I received an email from Medium a few days back highlighting changes in its AI content policy. It states -

“we will also now revoke Partner Program enrollment for writers who act in ways “that demonstrate clear misalignment with our mission” by publishing AI-generated writing”

Books, blogs, and movies are reflections of our creative quotient, experiences, imagination, and emotions. They play an indispensable role in our development, both as individuals and as a society. It is therefore an important step by Medium to safeguard the sanctity of the content space, so that it can continue playing the same role for future generations.

However, Medium brings up an interesting point next -

“today it’s difficult to definitively determine whether something is created by AI, so we’re focussing on identifying and distributing the best quality writing”

Medium AI Content Policy

An important point to note here: the inability to detect fraud has led to a change in strategy. The focus has shifted from identifying fraud to identifying quality (the good).

Soon, Medium's content recommendation and distribution system will go through an overhaul that will affect the blogs you publish as well. The assumption this rests on is that AI-generated content will be inherently different from human-written content. But will that always be the case? If not, what other parameters can the Medium AI team use to correctly identify authentic, quality writing that AI can't mimic? It's a tough nut to crack, but definitely a critical problem to solve.
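One direction such a system could take — and this is purely a hypothetical sketch, not Medium's actual method — is to rank stories by reader-behavior signals that are hard for AI-generated content to fake, rather than trying to classify the text itself. All field names and weights below are illustrative assumptions:

```python
# Hypothetical sketch: score stories by engagement signals instead of
# attempting AI-vs-human text classification. Weights are illustrative.
from dataclasses import dataclass

@dataclass
class StoryStats:
    views: int
    full_reads: int      # readers who reached the end
    highlights: int      # passages readers highlighted
    follows_gained: int  # readers who followed the author afterwards

def quality_score(s: StoryStats) -> float:
    """Blend engagement signals into a single score in [0, 1]."""
    if s.views == 0:
        return 0.0
    read_ratio = s.full_reads / s.views
    highlight_rate = min(s.highlights / s.views, 1.0)
    follow_rate = min(s.follows_gained / s.views, 1.0)
    # Illustrative weighting: deep reading matters most.
    return 0.6 * read_ratio + 0.25 * highlight_rate + 0.15 * follow_rate

deep_read = StoryStats(views=1000, full_reads=700, highlights=120, follows_gained=40)
skimmed = StoryStats(views=1000, full_reads=80, highlights=2, follows_gained=0)
print(quality_score(deep_read) > quality_score(skimmed))  # True
```

The design choice worth noting: behavioral signals shift the problem from "did a human write this?" (increasingly unanswerable) to "did humans find this worth reading?", which aligns with Medium's stated focus on distributing the best-quality writing.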

Here is what X is doing!

Do you remember the widely debated topic of fake accounts, or "bots", during the X acquisition? Elon's claim was that Twitter was flooded with bots, hurting ad sales and driving sentiment during major global events such as elections.

“Musk said his deal to buy Twitter can’t move forward unless it provides public proof that less than 5% of its accounts are fake or spam”

May 2022, Issue of Bots During Twitter Acquisition

To create a safe and open platform where people can exchange views without bias, and where businesses can generate good ROI, fraud prevention has always been among Elon's top priorities.

Several major steps have been taken to combat it over the last two years. However, since bots are able to pass authentication checks with ease, X has given up on detection alone and is resorting to new methods to combat fake accounts.

“X is going to start charging new users a small fee to authenticate themselves.”

“Unfortunately, a small fee for new user write access is the only way to curb the relentless onslaught of bots,” Musk said on April 15, 2024

Apr 2024, Elon on Battle with Bots

Here as well, the inability to identify fake accounts in time has led to such a drastic shift in the operating model of the company.

In many countries where the media is dominated by a few elites or governments, X is the only platform that still appears to stand on foundations of ethics and impartially gives voice to everyone. However, will it remain the same platform if the audience of those voices (new users) starts being charged? How many Twitter users in developing economies will pay for social media?

With this new strategy, either X stops being the same platform for its users, or the operating model proves unsustainable. This dilemma could be avoided entirely if we could design better fraud prevention systems to combat the rising risk from AI.

What is happening in advertising?

At Yugen.ai, we have been working with several adtech leaders, gaming publishers, and advertisers on fraud prevention use cases. The solutions have definitely delivered direct cost savings of millions of dollars. However, across all our partners, the top priority has always been ensuring genuine users are not affected by ML models incorrectly blocking their transactions/conversions. This gray area means several low-risk fraudulent transactions also get the benefit of the doubt.

In one of my recent conversations with the CTO of a leading adtech player -

We discussed moving away from a "default convert" to a "default block" strategy, meaning that when there is even slight suspicion of risk, block first and ask the user to prove they are genuine later.

This is very different from how the world has operated in the past. Risk solutions have always been customer-centric, giving customers the benefit of the doubt to make sure genuine ones are not affected, even when it costs the business. But here is a clear example, straight from the horse's mouth, that the rising risk from advancements in technology is demanding a change.

How does it affect us? Earlier, a few bad actors were getting converted. Now, a few genuine users may get blocked. Is this approach sustainable? Probably not. Fraud is an extremely rare event, and even slightly poor precision can affect hundreds of thousands of genuine users. But to shield ROI from rising risks while better systems are being set up, such measures, when calibrated, can add value. While this conversation covered only one specific type of fraud, it was interesting to see the shift in approach and thought process of those leading the industry.
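A quick back-of-the-envelope calculation shows why precision dominates when fraud is rare. The numbers below are purely illustrative assumptions, not figures from any of our partners:

```python
# Illustrative: impact of a "default block" policy at a low fraud base rate.
# Assume 10M daily transactions, 0.5% fraud rate, a model that blocks 95% of
# fraud (recall) but also wrongly flags 1% of genuine traffic.

total = 10_000_000
fraud_rate = 0.005
recall = 0.95
false_positive_rate = 0.01

fraud = total * fraud_rate                       # 50,000 fraudulent
genuine = total - fraud                          # 9,950,000 genuine

blocked_fraud = fraud * recall                   # 47,500 correctly blocked
blocked_genuine = genuine * false_positive_rate  # 99,500 genuine users blocked

precision = blocked_fraud / (blocked_fraud + blocked_genuine)
print(f"Genuine users blocked per day: {blocked_genuine:,.0f}")
print(f"Precision of the block decision: {precision:.1%}")
```

Under these assumed numbers, a mere 1% false positive rate blocks roughly twice as many genuine users as fraudsters (~99,500 vs ~47,500), for a precision near 32% — which is exactly why calibration matters before flipping to "default block".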

What does all this tell us about the future?

Every sector faces a threat in some form from advancements in AI, hurting user experience, ROI, and partner trust. Today, we are not in a position to stop AI-based fraud entirely, but proactive measures can definitely minimize the risk, improve the user experience, and save millions of dollars. Industry leaders are already showing us the way, as the examples above demonstrate.

One can certainly question whether the above measures by Medium, X, or adtech players are sustainable -

  1. Can AI not mimic content written by humans?
  2. Can X really charge users without compromising its USP?
  3. Can AdTech companies really move to a “default block” strategy for suspicious transactions?

The strategic shifts certainly raise concerns and are most likely not sustainable. But there is no denying that they are the need of the hour while the industry prepares better fraud prevention solutions. There is an urgent need for business leaders to buckle up and take proactive measures to combat these risks before it's too late.

Yugen.ai is a trusted AI partner for global leaders in Gaming and Adtech. Connect with Kumar Sanjog to talk about applications of AI in Fraud Prevention.
