How Media and Influencers Are 10x-ing Their Revenue With AI… by Lying to You About It
The lucrative business of lying to you about AI
Raise your hand if you’ve heard one of the following-
- If you’re not doing this with ChatGPT, you’ll fall behind.
- Use these 20 AI tools to become a one-man media company.
- Make $10K/month with GPT-4.
Or perhaps your circles prefer these topics-
- We are at the model singularity, and super-intelligent models will present an existential threat to humanity.
- Soon AI will come and replace large swaths of jobs, leaving people unemployed and in no position to catch up.
- Models like GPT will be implemented in cases where we can cause a lot of harm and ‘unravel the fabric of society’.
To a layperson, this might seem confusing. On one hand, we have people proclaiming that these models are the greatest thing since electricity, and will change the way business is done (you’ll become a millionaire by mastering ChatGPT). On the other- GenAI is going to morph into something more and Terminator all of us. Turns out that training models on large swathes of human data is an existential risk (wonder what that says about humanity).
I’m going to let you in on a secret- these seemingly opposite stances both aim at the same thing- making money from you. In this article, I will go over how AI Misinformation- both the hype and the negativity- is leveraged by people and media companies to create an environment of fear and confusion, and ultimately extract your most valuable resources- your time, money, and attention.
Want to know more? Keep reading 👇👇
How AI Misinformation is Profitable
Before getting into the specifics, let’s do a general overview of how people profit from AI Misinformation. Ultimately, any form of AI Hype, whether positive or negative, does three things-
- Creates a sense of urgency- If you’re not using GPT, you’re going to fall behind… Our minds are very susceptible to this kind of suggestion, so the second we hear it, we scramble to take action. Or take the negative hypers, who will have you convinced that AI is an existential threat that will destroy societies. There is no better way to mobilize people than to tell them that something is about to destroy their lives (either because something is so good that you’ll fall behind if you don’t use it, or because something is so bad that it might destroy humanity).
- Distorts Expectations- How many people are trying to send you guides on how to do 10x more with AI? If they really were that productive, why aren’t they implementing the advice themselves and producing more? Often this comes with an attempt to sell you something- a course, a product, or just their social media presence. How many AI Experts have come crawling out of the woodwork recently, promising to change your life if you follow their social media? Ultimately, they all claim that you (with no real skills) will do in minutes what people (with actual skills) struggle with for hours.
- Glamorizes certain factors- An offshoot of the last point: misinformation often glamorizes certain parts of a job while ignoring the realities of the situation. This is used to sell people a lifestyle. The prominent example is all the entrepreneur/day-trader courses that claim they will make you 6 figures with a few short hours of work. In AI, we see this with courses that will get you ‘industry-ready’ from scratch in 60 days, or influencers who make it seem like the only requirement for becoming a Machine Learning Engineer is to copy LangChain videos or Medium blogs online.
Ultimately, all of this causes a triple whammy of problems-
- The Misinformation acts as a red herring, distracting from the very real issues present in these systems. In one of the greatest twists of irony, the Criti-Hype spread by many ‘AI Alignment/Safety people’ causes the reckless adoption of AI systems, because developers are too busy fighting imagined demons to pay attention to the real ones.
- The misinformation preys on the people most vulnerable to it- beginners, people in difficult situations, and others who need a way out. These people lack the resources to think critically, filter out scams, and advocate for themselves when things don’t work out. Sadly, many of these victims are gaslit into believing that the failure was their fault, and not the result of inflated expectations and predatory promises.
- The market is flooded with people with a half-understanding of topics and fields. These people often lack the baseline fundamental knowledge to be able to identify gaps in their skills and iteratively improve. This makes the situation worse for all stakeholders involved- including these people who are now stuck in a field that does not align with their abilities/interests and are always a step behind.
The gurus selling you all of this don’t care. They sell you false promises and leave you to deal with the consequences. I will now go over some egregious examples that demonstrate what we’ve discussed and the serious problems it causes.
AI Hype & Criti-Hype
AI Hype: Overconfident techies bragging about their AI systems (AI Boosterism).
AI Criti-Hype: Overconfident doomsayers accusing those AI systems of atrocities (AI Doomerism).
-Weiss-Blatt. I like her phrasing, so we’ll work with that.
How AI Hype is used to make money from you
Ultimately, both kinds of AI hype grab attention and rile you up emotionally- putting you in the perfect state for a sale. Let’s investigate how the proponents of each kind benefit-
How to Profit off AI Hype
The people feeding the AI hype cycle benefit in several key ways. First, take LinkedIn influencers like Zain Kahn. He used to brand himself as a marketing/productivity guy on LinkedIn, but after the advent of ChatGPT, Zain evolved into the final form of the self-improvement bro- the ChatGPT guru. Now calling himself ‘The AI Guy’, Zain tells you how to 10x your productivity with 20 new AI tools every week. Take a look at Zain’s recent post, where he announced the death of search and Google (not his first time, FYI)-
If you know about AI, this should immediately ring alarm bells. I could talk about how Google has very advanced LLMs of its own, or how search and Language Models do very different tasks. But let’s keep it simple. Look at the first revolutionary plugin Zain shills- AskYourPDF, which answers questions based on a PDF. Take a look at the following image from Microsoft’s fluff piece on GPT-4, where they claimed that GPT-4 exhibited AGI (read about it here). In their own document, they made it very clear that even with very explicit prompts and simple information, GPT made things up. So much for the reliability of the AskYourPDF tool. To verify the authenticity of its outputs, you would have to read through the texts anyway, which defeats the point of using AI in the first place.
But maybe the AskYourPDF team had done what no one else has been able to do and fixed LLM hallucination (if you want to know why it happens, read this). Maybe they really were that good. So I decided to test things out myself. I plopped over to their website and dropped in my free ebook, created by compiling a few of my articles on AI. Then I asked it a simple question- which free GPT competitors can you use? (to see the answer, read this article or download the book linked prior)- deliberately using the wording from the ebook. It kept failing.
This due diligence took me 3 minutes. And it’s not like this is a super niche issue- we’ve known about Transformers hallucinating for a year, at least (that’s when I came across it, I’m sure others have known it longer). Someone calling themselves ‘The AI Guy’ should at least give a disclaimer. So why doesn’t he?
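The kind of spot-check described above is easy to automate. Below is a minimal sketch of one way to do it- note that `ask_pdf` is a purely hypothetical stand-in for whatever PDF-QA tool you are testing (this is not AskYourPDF’s actual API), and the grounding check is a deliberately crude heuristic- it just flags capitalized names in the answer that never appear in the source document.

```python
# Hypothetical stand-in for a PDF-QA tool. A real test would call the
# actual service; this stub returns a plausible-but-ungrounded answer
# so the check below has something to catch.
def ask_pdf(document: str, question: str) -> str:
    return "The document recommends using PaidToolX as a GPT competitor."


def grounded(answer: str, source: str) -> bool:
    """Crude grounding check: does every capitalized name in the answer
    actually appear in the source text? Hallucinated names won't."""
    names = {w.strip(".,!?") for w in answer.split() if w[:1].isupper()}
    return all(n.lower() in source.lower() for n in names)


source_text = "Free GPT competitors include Claude and HuggingChat."
answer = ask_pdf(source_text, "Which free GPT competitors does the ebook list?")

# If the answer names things the source never mentions, flag it for review.
print("grounded" if grounded(answer, source_text) else "possible hallucination")
```

This is a sketch under stated assumptions, not a hallucination detector- real grounding evaluation is an open research problem. But even a three-minute harness like this is more due diligence than most tool-promotion posts bother with.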
Turns out there’s a lot of money to be made in this space. I have a comparatively small newsletter (33K subs), but I have been offered sponsorship opportunities close to USD 6k/month to promote these products. If you’d like to see how much Zain charges for his ads, take a look below.
As a true-blue capitalist, I spiritually resonate with a consumeristic society’s drive to monetize every experience of life and sell it as a subscription service. I have nothing against Zain monetizing his content (I do so myself). However, Zain does so through hype, misinformation, and a complete disregard for nuance. That is scammy, and it should be called out. The lack of disclosure on any of the AI Products he promotes (both that he was sponsored by them and about the risks of these products) is extremely problematic and signals one of two possibilities-
- Zain knows the risks of these products but chooses not to mention them.
- Zain does not understand these products enough to have nuanced discussions about them.
Neither is a good look. Zain is far from the only influencer to have come out of the woodwork to profit from the AI Hype. Open social media, and you’ll see tons of posts pitching GPT/Gen-AI as the silver bullet for everything from boosting your sales and weighing complex decisions to building entire businesses and learning the hardest ideas.
A while back, I covered how influencers were using the Deep Learning hype to sell you false promises. In it, I covered the video BUILD and SELL your own A.I Model! $500 — $10,000/month (super simple!) by the popular coding YouTuber Code with Ania Kubów (and courses promising to teach you Data Science in 6 months). This hype cycle is the next stage- selling you false promises of magical AI in order to sell you products, courses, etc.
I will end this section with an observation. The whole GPT/Gen-AI hype cycle reminds me of the cryptocurrency scams of 2021–22. Both follow the same playbook- build a lot of hype around a technology/idea few understand, promote products through social media false prophets pretending to be domain experts (Graham Stephan and co for Crypto, Zain and his brethren for AI), and use social proof to paper over any gaping holes in the fundamentals. And just like the Crypto scam, ordinary people like you and me will be the ones who get screwed. However, the consequences will be far worse, since unlike Crypto, AI is already being recklessly integrated into various parts of society.
These people often use their social proof to sell their products. They rely on massive ad campaigns, paying celebrities, outlets, and other influencers to promote their products. Regular folks following these people/organizations come across these promotions and google the original product. There, they see the paid releases + massive following and think the product is legit. So now real people put real money into the project, which further increases the hype behind it, forming a strong feedback loop.
- How Crypto/Gen AI prey on people. To learn more read- SBF is a fraud. But he was never the problem
I’m going to end this section here. Let’s now cover the twin to this- AI Criti-Hype.
How People Profit from AI Criti-Hype
Just as with AI Hype, Criti-Hype is a very lucrative business. If people already believe that AI is all-powerful, it’s not hard to convince them that AI will come and destroy their lives. The sad thing is that these people often position themselves as the voice of reason against the hype. By positioning yourself as an edgy counter-culture rebel, it’s easy to capture the attention created by AI Hype and sell big, scary risks. From there, the playbook is the same- generate hype, cash in that sweet social media clout through speaking engagements, products, and courses, and enjoy your fame and riches. This is why we see established names air pieces like ‘AI is like Human Intelligence’. Because it’s sexy, generates clicks, and gets people emotionally riled up. Perfect when you want to sell them something.
The unfortunate thing about this is that AI has very real societal risks and harms. Take a look at this beautiful chart giving us a comprehensive overview of some of the risks posed by AI-
Now go back to the proponents of AI Criti-Hype trying to warn you about the dangers of AI. Rarely, if ever, do they talk about these issues and the harm they are causing right now. Instead, they choose to focus on largely fantastical scenarios that bear little resemblance to reality.
For a prominent example of this, think back to that open letter calling for a pause on AI more powerful than GPT-4. In the insightful article A misleading open letter about sci-fi AI dangers ignores the real risks, the authors rightly pointed out that the letter and its proponents were steering the conversation away from the real issues. They had a beautiful table summarizing the contrast between the real and speculative risks of these AI tools-
Fear is a powerful marketing tool, and criti-hype is just that- a marketing ploy to gain attention and power. It is used to slow down development, centralize power over AI in certain groups, and make everyday people cower before this scary new thing they must be protected from. Take a look at this letter Yann LeCun received about how AI Doomerism ruined the writer’s life-
Long story short, a little over 2 years ago I was doomscrolling about AI gone rogue scenarios, reading lesswrong articles about the typical paperclip argument ; a superintelligence will come along, you tell it ‘make me happy’ and boom, the AI stimulates the part of your brain that produces dopamine for eternity.
This triggered a *beyond severe* panick attack where I basically gave up on life out of the fear of some rogue AI that will just keep me alive forever and ever. Every time I tried to confront my fear the concept of living for eternity against my will overwhelmed me. I somehow got through school while being quasi paralysed 24/7 the entire year. Now although my anxiety symptoms are still the same, and I can’t go to school or do almost anything, recently I’ve regained hope that my life doesn’t have to end here
-Just like positive hype has bankrupted people, doomerism causes serious harm. Read the full letter here
In the interest of conciseness, I’ll end this section here. One thing should be clear- both kinds of hype ultimately accomplish the same thing, and they follow the same playbook to ensnare you in their hype cycles. Before we finish, there is one final thing about misinformation I would like to cover- how misinformation hides the real problems with systems. This section is especially important for the Instagram Activists reading this, since y’all often contribute to this situation.
How misinformation hides the real problems with a system
When misinformation dominates the discourse on a topic, good-faith debates on improving the systems become hard to have. Take the example of AI Art, which was the internet’s favorite hobby before ChatGPT became the thing. I saw multiple people critique these AI Art systems by claiming that they were ‘copying’ the human artists in their datasets- that AI Art Generators like Midjourney or Stable Diffusion were just copying their training samples and making minor tweaks. These people wanted to ensure artists were compensated for this.
If you understand how these generators work, you know that claims of this nature are untrue. And this is doubly unfortunate, because there is actual stealing from artists in the building of AI art generators- just not in the way these online activists claim (the real method involves using a nonprofit front to get around copyright law, something much subtler than naked theft). By focusing on the simplistic ‘AI steals’, you make it easier for corrupt systems to deflect criticism on semantics and technicalities. If the goal is to make an actual difference, that’s not ideal. For those interested in how the stealing actually happens and what can be done to fix it, read Artists enable AI art — shouldn’t they be compensated?, which I wrote for the legendary AI publication TheGradient.
Unfortunately, we are seeing the same with AI these days. The loudest voices are the ones selling hype, and this is influencing real-life action. Hopefully, analysis from saner voices will break through and counter it. In the meanwhile, you have the playbook you can use to start your own mini hype cycle.
That is it for this piece. I appreciate your time. As always, if you’re interested in reaching out to me or checking out my other work, links will be at the end of this email/post. If you like my writing, I would really appreciate an anonymous testimonial. You can drop it here. And if you found value in this write-up, I would appreciate you sharing it with more people. It is word-of-mouth referrals like yours that help me grow.
Save the time, energy, and money you would burn by going through all those videos, courses, products, and ‘coaches’ and easily find all your needs met in one place at ‘Tech Made Simple’! Stay ahead of the curve in AI, software engineering, and the tech industry with expert insights, tips, and resources. 20% off for new subscribers by clicking this link. Subscribe now and simplify your tech journey!
Using this discount will drop the prices-
800 INR (10 USD) → 640 INR (8 USD) per Month
8000 INR (100 USD) → 6400 INR (80 USD) per year
Reach out to me
Use the links below to check out my other content, learn more about tutoring, reach out to me about projects, or just to say hi.
To help me understand you, fill out this survey (anonymous)
Check out my other articles on Medium: https://rb.gy/zn1aiu
My YouTube: https://rb.gy/88iwdd
Reach out to me on LinkedIn. Let’s connect: https://rb.gy/m5ok2y
My Instagram: https://rb.gy/gmvuy9
My Twitter: https://twitter.com/Machine01776819