Finally, an internet standard for writers’ rights vs. AI companies
The launch of the Really Simple Licensing standard and an update on Medium’s policies for AI companies using your writing
The way AI companies use your writing continues to be an evolving issue. This is the latest in a series of updates from us that started in 2022 with a call for consent, credit, and compensation, and continued recently with an open discussion of how, when, or if AI is used in writing here on Medium.
Today, we are announcing our participation in the launch of a new internet protocol that will give Medium and other internet companies a standardized way to control how AI companies use your writing, what your rights are, and what you get in return if you opt in to that use.
Specifically, the new standard gives us a way to tell AI companies the answers to these three questions about your rights and whether you deserve credit and compensation for your writing:
A) Should AI companies have access to your writing if they offer nothing in return, specifically offering neither credit nor compensation? Our current stance is no.
B) Should AI companies have access to your writing if they offer credit in the form of sending readers to your writing? Our current stance is yes.
C) Should AI companies have access to your writing if they pay you money for it? The simple part of our stance is that Medium doesn’t sell your data or content, so we’d only allow access if we could find a way to pass all the money back to writers. The complicated part is that we know some companies are paying, but we don’t know how much. So the question is more conceptual: Can Medium pursue a negotiation on your behalf? We assume yes, but we want to do that transparently before we start.
The standard launching today is called Really Simple Licensing (RSL). This protocol, built with the support of a consortium of internet companies including Medium, Reddit, Quora, Yahoo, and O’Reilly, allows you, as the content owner, to define the rights and restrictions AI companies have when using and training on your writing.
Prior to RSL, our options at Medium were to block AI companies manually (which is what we were doing) or join the Cloudflare initiative to charge or block AI crawlers (which wouldn’t allow us to accept payments for individual stories, only at the site-wide level).
To start, we’ve implemented the simplest version of this new RSL Standard, which prohibits AI companies from using your stories to train their AI models but allows them to summarize and link back to your writing in AI-generated search results. This is a direct extension of our current policy, which we launched in 2023.
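To give a sense of what this looks like in practice, here’s a rough sketch of the kind of terms a license file under that policy expresses: no use of stories as AI training data, but summarizing and linking back in search results is allowed. The element names below are illustrative shorthand rather than the exact published RSL schema, so treat this as a sketch of the idea, not a copy-paste example.

```xml
<!-- Illustrative sketch only: element names here are shorthand, -->
<!-- not the exact published RSL schema. -->
<rsl>
  <content url="https://medium.com/">
    <license>
      <!-- Do not use these stories to train AI models -->
      <prohibits>ai-training</prohibits>
      <!-- Summarizing and linking back in AI search results is allowed -->
      <permits>ai-search-with-citation</permits>
    </license>
  </content>
</rsl>
```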
Back when we launched that policy in 2023, we noted that there was an issue of basic fairness. AI companies were profiting off your writing and giving nothing back (other than an influx of AI-generated slop). So we put out a call for AI companies to offer consent, credit, and compensation. Behind the scenes, we tried to get major internet companies to work together. It’s wild to us that it took this long to get to a formal protocol. In our view, there was never a viable negotiating strategy unless we all banded together. So we’re happy, finally, to support a formal, standardized way for all content owners to tell AI companies the rights and restrictions on their content.
At Medium, we are trying to navigate this issue on your behalf and come to thoughtful default policies. Our original position in 2023 felt straightforward: Why would anyone participate in allowing AI companies access to their writing when there was no value coming back to the community here? The situation has changed quite a bit, and now there is some value coming back in the form of visitors to your writing and the potential for some financial compensation.
To that end, I want to talk to all of you about the current state of the credit that is coming with this standard (in the form of citations to and views on your writing), and what we think the upcoming options will be for financial compensation. If we can get clear on credit and compensation, then RSL gives us a way to define consent.
When AI companies credit you, you get more readers
One of the major use cases for AI companies right now is acting as search engine replacements. Examples include ChatGPT, Perplexity, and Google Gemini. Originally, these AI search replacements just offered up their own generated answers without citing the stories they had trained on. But more recently, it has become common for them to both credit the source material (you) and drive clicks to that source material.
So when we ask AI companies for credit, what we effectively mean is that we want them to help promote your writing and send you readers.
For example, earlier this year, ChatGPT was the fastest-growing source of referral traffic for Medium writers. ChatGPT is about 1.3% as big a source of readers as Google, and, importantly, those readers are 4x more likely to convert into Medium members.
So far, OpenAI has also done the best job of giving content creators a lever for consent. In particular, they publish separate rules that let us block their training agent from using your writing while still allowing a credited citation to your writing in their search results. The RSL Standard will let us formalize those rules for all AI companies.
This might sound like splitting hairs if you aren’t up on the way that AI companies operate. In the simplest view, there are usually two parts. The first part is the LLM, which is a giant statistical model of human writing representing trillions of data points, none of which can be cited back to source material. In the ChatGPT case, we are blocking their ability to train their LLM on your writing because they are not offering any citation or compensation in return.
The other part is a context window where the AI retrieves a smaller amount of content in response to your search and has the LLM analyze it. These context windows do allow for credit in the form of citations and are specifically why ChatGPT is now sending so many readers to Medium.
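To make the distinction concrete: this split maps onto separate crawlers. OpenAI, for example, documents its training crawler (GPTBot) separately from its search crawler (OAI-SearchBot), so a publisher’s robots.txt can block one while allowing the other. Here is a minimal sketch of that pattern (illustrative only, not our actual robots.txt):

```
# Minimal robots.txt sketch (illustrative, not Medium's actual file).

# GPTBot is the crawler OpenAI documents for gathering training data:
User-agent: GPTBot
Disallow: /

# OAI-SearchBot is the crawler OpenAI documents for powering
# cited, linked search results:
User-agent: OAI-SearchBot
Allow: /
```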
Given that people publish on Medium to get readers, we think the obvious default stance is to allow a product like ChatGPT to use your writing in its AI-generated search results if it is sending significant traffic in return. We are hoping that Google will catch up to OpenAI; so far, traffic from Gemini AI summaries is very poor.
For now, we are going to keep choosing options that optimize for people visiting and reading your writing.
But we’re also keeping in mind that many of you may want to opt out of participating in this new AI ecosystem even if opting out leads to fewer readers for you. In fact, the new RSL Standard specifically contains a section that allows this level of fine-grained control, where some writers opt out while most opt in. I personally contributed this section to the standard knowing that some Medium writers would want to opt out.
We think opting out is probably not a particularly effective protest given how much content these AI companies have access to. That’s why we’ve always been angling for an industry-wide solution like RSL. But I also understand that some people may have other reasons to opt out. So I have a request: If you think you would opt out of this option for your writing on Medium—no to training but yes to appearing as a citation in results—could you leave a response saying why?
The AI companies should pay compensation
Now to the second part, about payment. The RSL Standard includes two options for negotiating financial compensation from AI companies.
In the best-case scenario, the RSL Standard will lead to a direct payment to you based on the way AI companies use your writing. This future possibility exists because the standard also defines a separate non-profit agency, the RSL Collective, to negotiate, collect, and distribute compensation on behalf of the entire internet. Think of it as similar to ASCAP, the music rights organization that negotiates with venues, broadcasters, and streaming services on behalf of musicians.
We lean toward internet standards and think the RSL Collective is probably the better approach for the overall health of the internet. But we also understand that the full adoption of this may be a ways off.
In the meantime, the RSL Standard also allows for a simple way to indicate to AI companies that we are willing to negotiate. This part of the standard is simply a listing for a contact form.
We’d like to do this, do it transparently, and do it with the goal of passing the compensation on to you. In our original position, we called this Negotiating as a Service. Some fights are too small for individuals. But we think this one is only medium difficulty for a company like Medium.
As with credit in the form of traffic, the RSL Standard is flexible enough to allow individual writers to opt out. So here again, a request: If you think you would opt out, can you say why? Leave a response; I’ll be reading them.
Wherever you personally land, I hope you’ll agree that the RSL Standard (and the RSL Collective) represents a much-needed and meaningful step forward in clarifying the relationship between writers and AI companies. We see it as a framework that lets us answer some fundamental questions and provides a mechanism for giving writers the consent, credit, and compensation they deserve.

