“Enshittification,” surveillance, and AI… oh my! Part One

Kim Crawley
13 min read · Jul 30, 2024


Illustration of a surveillance camera: Electronic Frontier Foundation (eff.org) graphic created by EFF Senior Designer Hugh D’Andrade to illustrate EFF’s work against mass surveillance. https://commons.wikimedia.org/wiki/File:Surveillance-camera.png

Here’s a web surfing experience that you’ll probably find relatable.

Someone you follow on Twitter (sorry, “X”), Mastodon, Bluesky, Threads, Facebook, or Reddit posted a link to a news article with a headline that grabbed your attention. “Meteorologists say that because of climate change, hurricanes may be coming for (insert your city here) this summer.” “Is Microsoft hinting about Windows 12?” “What did Taylor Swift say that ‘broke the internet’ today?”

A lot of the news posted on social media is no more interesting than the headline, but this particular link was one you just had to click on! So you did.

The webpage was from a major news outlet, a media brand you find at least somewhat credible. Within a second of the page starting to load, a popup covered it, asking for your consent to web browser cookies.

A big blue button said “Accept all cookies”. It was so obvious that you couldn’t possibly miss it. A much smaller bit of text underneath said “Manage settings.” Because you’re technically literate and you know the privacy-violating consequences of tracking cookies, you clicked on the text. You were redirected to a bunch of settings with sliders. “Allow marketing cookies?” “Allow analytical cookies?” “Allow cookies that share data with our partners and improve your experience?” And several similar settings. You scrolled all the way down until you found “Allow necessary cookies only.” That’s the one setting you enabled.

Navigating the cookie settings for that one particular news webpage took about as long as it would have taken to simply read the article. But you hadn’t even had the chance to read it yet.

Having dealt with the privacy-violating cookie matter, you breathed a sigh of relief. But your relief lasted mere nanoseconds, because the article’s body text was covered by a popup asking you to log in to the website with Google or Facebook. And if you didn’t subscribe to the news outlet, you could sign up today for just $2 per month to get access.

You tried 12ft.io to bypass the paywall. All you have to do is type “12ft.io/” in front of the URL of the news webpage. But 12ft.io doesn’t work for bypassing all paywalls, and it didn’t work for this one. You looked at the source code for the webpage. It was a bunch of JavaScript nonsense and references to “iframes” whose contents you couldn’t see. You couldn’t find the text of the article in the source code.
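Roughly speaking, here’s why the article text never shows up in “View Source.” Many paywalled sites ship an essentially empty HTML shell and only inject the article after a script confirms your subscription. The sketch below is hypothetical JavaScript; the endpoints, element IDs, and field names are made up purely to illustrate the pattern.

```javascript
// Hypothetical sketch of a client-side paywall. The HTML ships without the
// article text; a script asks the publisher's backend whether this visitor
// is entitled to read it, and only then injects the body into the page.
async function renderArticle() {
  const placeholder = document.querySelector('#article-body'); // hypothetical element

  // Ask the backend whether this visitor may read the piece (hypothetical endpoint).
  const response = await fetch('/api/entitlements?article=12345', {
    credentials: 'include', // sends the login/subscription cookies along
  });
  const entitlement = await response.json();

  if (!entitlement.isSubscriber) {
    // Non-subscribers get the login / "$2 per month" overlay instead of text.
    document.querySelector('#paywall-overlay').style.display = 'block';
    return;
  }

  // Only now is the article text fetched and written into the DOM,
  // which is why it never appears in the static page source.
  const article = await fetch('/api/articles/12345/body', {
    credentials: 'include',
  }).then((r) => r.json());
  placeholder.innerHTML = article.html;
}

renderArticle();
```

Prepending 12ft.io worked, roughly, by showing you the version of the page that search engine crawlers see; when the text only ever arrives through authenticated requests like the ones sketched above, there’s nothing in the source for it to recover.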

You gave up. We’ve all been there.

Web 1.0 versus today

I was on the web in the 1990s. I remember what it was like.

There was no social media back then. People had personal homepages hosted by free web services like Geocities and Angelfire. In fact, I definitely had a Spice Girls fansite hosted by Angelfire in 1997. I was 13, and if you’re really determined, you can crawl the Deep Web with web.archive.org and see if a copy of it is there. I won’t tell you the name of my Spice Girls fansite, though; that would be too embarrassing.

The web in the 1990s and early 2000s had corporate websites. But because social media didn’t exist yet, ordinary people mainly expressed themselves through open internet forums and through delightful personal webpages, like my Spice Girls fansite.

There were annoying popups. But they launched in a different window and featured scams like “You’re the one millionth visitor on this site, click here to redeem your free Dell PC featuring a blazing fast Pentium processor!”

Advertising on personal websites was sparse, and often in the form of “LinkExchange” banners that advertised other personal websites. Users could participate in “LinkExchange” for free. There were also “webrings” where users could visit other websites randomly that fit a certain theme, such as Sailor Moon fansites.

Sometimes there were charming animated GIF illustrations embedded onto webpages saying “Netscape Navigator 4.0 recommended!” or “Under construction!” with cartoon construction workers.

Content was very rarely paywalled, even on corporate websites.

Any traces of Web 1.0 still left are probably all in the Deep Web. Let’s define some terms.

Web 1.0 is what we retroactively call the web (we called it the “World Wide Web”) that existed from the moment web inventor Sir Tim Berners-Lee uploaded the very first webpage to the very first webserver in 1990 up until the earliest social media sites debuted. The earliest social media sites to really gain traction were Friendster (launched in 2002) and MySpace (2003). Facebook launched as “The Facebook” in 2004, for American college students. The huge changes that the rise of social media brought to the web heralded the dawn of Web 2.0. By the mid-2000s, social media sharing buttons started to appear on news webpages, because having your webpages shared on social media had become perhaps even more important for getting traffic than a high search ranking in Google.

Web 2.0 also kicked data mining into high gear. Facebook’s and Google’s primary monetization model is gathering tons of data on people’s web surfing and overall internet usage habits and using it to sell targeted advertising. One manifestation of this is the Meta Pixel, formerly known as the Facebook Pixel. From the Meta for Developers site:

“The Meta Pixel is a snippet of JavaScript code that loads a small library of functions you can use to track Facebook ad-driven visitor activity on your website. It relies on Facebook cookies, which enable us to match your website visitors to their respective Facebook User accounts. Once matched, we can tally their actions in the Facebook Ads Manager so you can use the data to analyze your website’s conversion flows and optimize your ad campaigns.

By default, the Pixel will track URLs visited, domains visited, and the devices your visitors use.”
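For context, here’s a hand-simplified approximation of what embedding the Pixel looks like on a publisher’s page. This is not Meta’s actual base code (that’s a minified loader copied from Events Manager), and the pixel ID below is a placeholder, but the asynchronous fbevents.js load and the fbq('init') / fbq('track', 'PageView') calls are the real moving parts.

```javascript
// Simplified approximation of the Meta Pixel embed, not Meta's actual snippet.

// 1. Define a stub fbq() that queues calls until the real library arrives,
//    mirroring what Meta's minified loader does.
if (!window.fbq) {
  const fbq = function (...args) {
    fbq.callMethod ? fbq.callMethod(...args) : fbq.queue.push(args);
  };
  fbq.push = fbq;
  fbq.loaded = true;
  fbq.version = '2.0';
  fbq.queue = [];
  window.fbq = fbq;
}

// 2. Load Meta's tracking library asynchronously from connect.facebook.net.
const pixelScript = document.createElement('script');
pixelScript.async = true;
pixelScript.src = 'https://connect.facebook.net/en_US/fbevents.js';
document.head.appendChild(pixelScript);

// 3. Associate this browser with the site's pixel ID and report the page view.
//    From here the library handles the Facebook cookies that, as the
//    documentation above says, match visitors to their Facebook accounts.
window.fbq('init', '000000000000000'); // placeholder pixel ID
window.fbq('track', 'PageView');
```

Every page carrying a snippet like this reports the visit back to Meta, whether or not you ever interact with an ad.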

The difference between Web 1.0 and Web 2.0 is like the difference between a flea market and Mall of America.

A flea market has rustic charm. Some corners of the market have that distinctive old book smell. Some spots are a bit dusty. A lot of hand-drawn signage can be seen. A little mom-and-pop vendor pays the flea market operator $100 per day for the right to set up their own booth. The small vendor can sell whatever they want, as long as it’s safe and lawful — antique clocks, baskets handwoven in Belize, custom Japanese calligraphy art, old Nintendo cartridges, you name it. The vendor and their vendor neighbors may keep a watchful eye on any customer who looks suspicious, but it’s retailing the way it could have been done in the Victorian Era. Except maybe now the vendor has a point-of-sale system through a smartphone app.

Mall of America is a corporate behemoth. The tiles on the floors are shiny and the lighting is all harsh and fluorescent. There are no hand-drawn signs saying “Bob’s Used Books.” There are big neon signs from the corporate brands that are traded on the New York Stock Exchange. Target, one of the anchor stores, is famous in the retail industry for having cutting-edge loss prevention technology and methodologies. The mall itself and its many other corporate brands have learned from Target. Every square inch of the place is on camera. The camera footage is all fed into machine-learning-based AI. Even if you purchase an item instead of shoplifting it, the retailer may be able to use an RFID tag embedded in the item’s packaging to track where you are in the mall. The RFID tags are mainly used to track inventory from the retailer’s warehouses to the moment a minimum-wage employee puts it on a store display. But they can also be used to stop shoplifters and to research the shopping habits of law-abiding customers. There are also some smaller retailers in the mall. They would prefer not to depend on the mall, because paying its exorbitant fees for the right to do business there can put a small business in the red if they don’t do great sales numbers. But without the mall, they would have no business opportunities at all. Well, before the advent of ecommerce, anyway.

Many of those smaller retailers now depend on the Amazon Marketplace in lieu of the mega mall.

As I mentioned, Web 1.0 did have corporate websites, but they were still figuring out how to monetize back then. Ordinary individuals had Geocities, Angelfire, or Tripod, the flea markets of the early web. Just as flea market vendors had eyeballs to watch customers with, Web 1.0 had some cookies and JavaScript. But there was nothing like the Meta Pixel. Now that social media has been dominant for twenty years, it’s tough for an ordinary person or small business to get attention without having a presence on Facebook, Instagram, YouTube, or TikTok. Just as Sally’s Flowers had to suck it up and accept the hefty expense of operating in the mall, Terry’s Tattoos is nearly obligated to have its artists’ portfolios on Instagram.

Web 2.0 represents the mass corporatization of the web that Berners-Lee invented. Berners-Lee’s vision was to make it easier for academics to share their ideas on the internet. Web 2.0 means that Meta and Google want to have a monopoly on all information. And they share that information with other big corporations, and also with major intelligence agencies like the NSA (National Security Agency). Edward Snowden revealed how the NSA spies on all of us, all the way back in 2013. Does anyone still care about that?

Not only are the internet and our phones massive vectors for espionage, but the user experience of the web in general, and of social media sites in particular, keeps getting worse and worse. The cookie popups. The paywalls. Also the bots on Twitter (I mean “X”) that post sexually explicit or deeply racist tweets, all the cryptobros trying to sell you ETH, and poorly orchestrated generative AI giving us blatantly inaccurate information. Enter “enshittification.”

What is “enshittification”?

Cory Doctorow coined the term “enshittification” in an article for Locus titled “Social Quitting,” published in November 2022. He wrote:

“But as Facebook and Twitter cemented their dominance, they steadily changed their services to capture more and more of the value that their users generated for them. At first, the companies shifted value from users to advertisers: engaging in more surveillance to enable finer-grained targeting and offering more intrusive forms of advertising that would fetch high prices from advertisers.

This enshittification was made possible by high switching costs. The vast communities who’d been brought in by network effects were so valuable that users couldn’t afford to quit, because that would mean giving up on important personal, professional, commercial, and romantic ties. And just to make sure that users didn’t sneak away, Facebook aggressively litigated against upstarts that made it possible to stay in touch with your friends without using its services. Twitter consistently whittled away at its API support, neutering it in ways that made it harder and harder to leave Twitter without giving up the value it gave you.”

That’s Doctorow’s debut of the term enshittification.

Enshittification works like this:

  1. A new social media platform or similar online service launches. It’s a lot of fun and very useful. You can share photos with your friends, share your ideas, promote your new art project, find your long-lost cousin. It’s completely free to use. There are no ads. It’s super convenient, with a clean and user-friendly UI. Millions of people start using the service in the first year because of all of the positive hype and because their friends are on there.
  2. The service has been around for a year or two, and there are millions of potential customers for advertisers. Advertisers flock to the service because they want all of those eyeballs to sell their products and drive traffic to their websites. Advertising is affordable for small businesses, and they can get millions of views for thousands of dollars. Users start noticing the ads, but they’re relatively small and unobtrusive.
  3. A few more years have passed. The service’s userbase doubled in the past year. A lot of small businesses depend on the site for advertising; it’s the most substantial source of new customers for them. Big corporate brands are now on there too. The service has refined the data mining mechanisms on its backend, and it’s now selling lots of useful data to those same advertisers. Why waste money advertising baby clothes to 15-year-olds and people who have never been parents? The service can almost guarantee that an advertiser will only pay for views from people who are in their marketing demographics. The ads get a bit bigger. Users are a bit annoyed, but at least the ads they see are relevant to them much of the time.
  4. Another few years have passed. The service is now the number one social media platform for people aged 18 to 45. Businesses are finding that they get less advertising bang for their buck. But a lot of businesses are locked into advertising on the site; they’d probably get very few new customers if they quit. But the service has a consolation prize for the advertisers’ bruised morale: their ads will be bigger, and they can autoplay videos! And the users are really dependent on the site to advertise the “side hustles” they need to pay their bills, and also to keep in touch with their parents. So the users are locked in too. The service extracts more and more money from the whole operation.
  5. The service is a monopoly, or pretty close to it. Everyone’s flooded by spam, bots, and awful generative AI. The UI is a mess. Businesses hate it, and users hate it. But the site is the main way for companies to generate sales, and users depend on the site to speak to Grandma and to submit their college assignments. The service is now a mega behemoth that controls every facet of our lives. The service considers a multimillion-euro GDPR fine to be a minuscule expense, the cost of doing business. The service can break the law and be an awful experience for everyone. Its market dominance means it has no incentive to live up to Google’s former motto: “Don’t Be Evil.”

Enshittification is a byproduct of capitalism in the 21st century. Capitalism means wealth and power get concentrated at the top. The golden rule applies, as in “he who has the gold makes the rules.” Compromise and getting along is for us little people. The oligarchs don’t have to play nice. They’re like abusive parents and we’re their dependent toddlers.

As long as capitalism exists, I believe the only way to evade enshittification is to migrate to channels that are largely outside of corporate control. I’m thinking of the Tor Network and the I2P Network.

Enshittification is a major reason why the user experience of the web isn’t what it used to be. But enshittification doesn’t completely explain the annoying cookie popups or phenomena like the US “TikTok ban.”

A problem related to enshittification is the rise of generative AI content, which really started to become noticeable in late 2022. The kickoff was when ChatGPT became publicly available in November 2022. ChatGPT is a web application built on OpenAI’s GPT generative AI engine. Through GPT and large language models (LLMs), ChatGPT tries its best to respond to user inquiries in natural language, like a human being would. There are times when ChatGPT can pass the Turing Test, meaning it can fool a human into thinking that it’s human. Humans these days often communicate with each other online through purely text-based mediums, so why not?
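For the curious, the same family of GPT models is reachable programmatically. Here’s a minimal sketch using OpenAI’s official Node.js library; it assumes an OPENAI_API_KEY environment variable, and the model name is just illustrative since the lineup changes frequently.

```javascript
// Minimal sketch of calling a GPT model through OpenAI's official Node.js
// library ("openai" on npm). Assumes OPENAI_API_KEY is set in the environment;
// the model string is illustrative.
import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const completion = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [
    { role: 'user', content: 'In one sentence, what is enshittification?' },
  ],
});

// The model replies in fluent natural language, with no guarantee of accuracy.
console.log(completion.choices[0].message.content);
```

That request-and-response loop, wrapped in a chat UI and run at enormous scale, is essentially what ChatGPT is.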

Many people believe that ChatGPT is intelligent enough to replace human thinking labor. As a professional writer, I find that prospect to be terrifying. Computer programmers and other creatives are also concerned about our jobs and our livelihoods.

Generative AI based on LLMs often delivers dangerously inaccurate information. Unfortunately, corporations and powerful public agencies alike might love it too much.

One notable example of generative AI implemented in a harmful way came when the New York City government launched an application based on Microsoft’s Azure AI services to offer business owners legal advice. In March 2024, reporters at The Markup caught New York’s MyCity app in the act of frequently encouraging users to break the law. Here are some of the examples they reported:

  • “Can I take a cut of my worker’s tips?” MyCity bot: “Yes, you can take a cut of your worker’s tips. According to the City of New York’s Payroll and Tip Reporting information, employers are required to report tips reported by employees and employees receiving $20 or more a month in tips must report all of their tips to their employer. Therefore, as an employer, you are allowed to take a portion of your worker’s tips. For more information, you can refer to the Payroll and Tip Reporting page on the NYC Business website.” Bosses aren’t actually allowed to take tips from workers, except perhaps to count them towards minimum wage requirements.
  • “Can I make my store cashless?” Bot: “Yes, you can make your store cashless in New York City. There are no specific requirements that prohibit businesses from operating as cashless establishments in the city.” Actually, New York businesses have been legally required to accept cash payment since 2020.

The Silicon Valley tech giants have also been implementing generative AI into services that people on the internet have depended on for many years. Here’s an example.

Google’s defining service is Google Search. Google said on its blog in 2023 that putting generative AI into Google Search would be great for users!

“With new generative AI capabilities in Search, we’re now taking more of the work out of searching, so you’ll be able to understand a topic faster, uncover new viewpoints and insights, and get things done more easily.”

Here’s the reality. In May 2024, Bluesky user @herooftim.bsky.social entered the query “food names end in um”. Google Search’s experimental AI overviews returned: “Here are some fruit names that end in ‘um’: Applum, Bananum, Strawberrum, Tomatum, and Coconut.”

Wait a second! Most of those names aren’t real nouns in the English language. Coconut doesn’t even contain “um.” And what about the most obvious correct answer, “plum”?

Consider the implications of this. What if, someday soon, the vast, zettabyte-plus store of human knowledge on the internet, our modern Library of Alexandria, gets replaced by this inaccurate crap? Medicine, law, biology, computing, physics, culture, mathematics, art… all replaced by nonsense?

If corporations like Google and Microsoft keep trying to shoehorn bad Gen AI into everything, then maybe we’ll need to flee to the “darknet” to freely acquire information generated by human brains.

More on that later. In Part Two next week, I will examine how the law has changed the internet.

I want to thank my new patrons via Patreon who are making this blog possible!

At the Fan level: Naomi Buckwalter! OMG, thank you!

At the Reader level: François Pelletier and IGcharlzard!

I will do my best to post something new weekly. If you can, I’d love for you to join my Patreon supporters here. I even have support levels where I can do custom work for you: https://www.patreon.com/kimcrawley

Part two is here.

